As you may know, some time ago we acquired the source code for a working exchange. So the real question is: why haven't we seen SONOX in a working state yet?
Well, "working" means different things to different people. While it does run, a lot of work was needed to turn it into what we want to call SONOX, and we ran into many problems along the way.
Our plan from the get-go was never to do what some freelancers do: slap a theme on an existing system and pass it off. Even if we wanted to do this, it wouldn't be as straightforward as you might expect.
For starters, the source was already halfway through a theme swap; while some of the functionality was there, a lot of work still needed to be done, and some core components were missing entirely.
You can certainly forget rebuilding things from scratch piece by piece: the system was so tightly coupled to itself that we noticed almost right away that even maintaining what users see was going to be a struggle, let alone introducing new front-end systems to help us modularise and maintain the code in a fashion closer to today's standards.
The source was written in C# on ASP.NET; while I was familiar with its syntax and features, others on the team were not.
Generally my rule for choosing a language is to stick with what you know best. Pretty much anything can be done in most languages today, so as long as the job can actually be done without compromising performance for its use cases, shoot for the stars.
With such a small development team, we don't want to push the boat out too far and venture into fully uncharted waters. Even a little experience helps, and having another person who can peer-review your work and answer questions is, I would say, more important than using something only you are familiar with, both for future-proofing and for your own motivation. At the end of the day you're reaching the same destination; there are just many paths you can take, some quicker than others, but it's not all about speed.
Did someone say Legacy?
From the information given to us by the original developer, this system had been maintained by a single person for the past five years; every single update and feature had been crafted by them. That isn't necessarily a bad thing to start with.
I could see from the get-go that care had been taken to stick to certain standards, and amazingly most parts of the code were well documented (something I personally have not come across much recently), even with the odd joke here and there to spice up your day.
This certainly isn't a bad thing, is it? You may be thinking this, and you're correct: this is more of a developer's dream than a nightmare. Unfortunately, tied in with this system were five years' worth of redundancies and unused references.
If it weren't for the code-usage highlighting in my IDE, I wouldn't have initially noticed the amount of redundant code. After tidying up what we had received, we found we had removed almost 50% of the code as unused or redundant (mostly unused classes and display structure).
Now, all props to the original developer; this is not a dig at them at all. But when there is only one developer, you tend to know inside out how everything works (even after this amount of time you could blindfold yourself and get straight to your intended job), and with a project this old, without detailed deliverable documentation, no amount of code comments will necessarily explain your intended meaning, or how a feature works, to anyone else.
Just like any older architecture, when you want to change core features a lick of paint isn't really going to cut it, especially when you want to innovate.
After diving into the source for a bit, it became obvious how the code splits up. It all still ran under a single process, however, and as far as I could see it could not easily be scaled, nor could individual tasks be prioritised independently.
Taking what we could, we came to the conclusion that this would not fit the job as we want to do it (and scaling easily is important for this type of system), so I designed a system where, conceptually, the exchange is made up of lots of smaller microservices.
This modularisation allows for easy scaling, and with careful design each of these components allows for greater expandability. Not only are we no longer tied to one platform and one language, but if we create a better tool that outperforms our current implementation we can, with some configuration, easily hot-swap it in, leading to little if any downtime. Better for the user and better for us!
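To make the hot-swap idea above concrete, here is a minimal sketch (in Python for brevity, not the project's C#, and all names here are illustrative rather than SONOX's real API): a gateway keeps a registry of service implementations and routes requests by name, so a replacement service registered under the same name takes over without callers ever changing.

```python
class Gateway:
    """Toy service registry: routes named requests to whatever
    implementation is currently registered under that name."""

    def __init__(self):
        self._services = {}  # service name -> callable implementation

    def register(self, name, handler):
        # Registering under an existing name replaces the old
        # implementation; callers keep using the same name, so the
        # swap is invisible to them.
        self._services[name] = handler

    def handle(self, name, *args):
        return self._services[name](*args)


gateway = Gateway()

# v1 of a hypothetical order-matching check
gateway.register("match", lambda bid, ask: bid >= ask)
first = gateway.handle("match", 10, 9)

# Later, a drop-in replacement with the same contract is registered
# under the same name -- no caller changes, no downtime on their side.
gateway.register("match", lambda bid, ask: bid - ask >= 0)  # "v2"
second = gateway.handle("match", 10, 11)
```

In a real deployment the registry would point at network endpoints rather than in-process callables, but the design principle is the same: callers depend on a stable name and contract, not a specific implementation.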
Correct tools for the job
Sometimes as a developer you are tied to the tools you have, whether out of necessity (a requirements specification) or experience. I am, however, a firm believer in using the correct tools for the job. While you can make a hole with a hammer, why not use a drill when you have access to one?
The old source used complex data queries to build the graphs for each pair. From request to page-load completion, on a fairly generous server with little to no other activity, querying, formatting and receiving this data took up most of the load time. Simply disabling these queries made the pages load almost instantly, compared to 2-5 seconds for smaller subsets and significantly longer for larger ones.
Clearly there was not much of an issue in the parsing of the data, as that was handled client-side (though this did mean that weaker devices would end up taking a lot longer to load).
Our first port of call, while we still had an inkling of hope of releasing on this source, was to move to server-side rendering of the data. But while that was better for the client, it certainly didn't bode well long term for the server.
This is where using the correct tool comes into play: by inserting some small queries into the relevant code base, we can use a time-series database (the kind built for graphing) to shrink the time taken to grab and format the data immensely.
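The core reason a time-indexed store wins here can be sketched in a few lines (an assumed design for illustration, not the actual SONOX code): instead of re-aggregating every raw trade on each page load, trades are folded into fixed-width OHLC buckets as they arrive, so a chart request becomes a cheap range read over pre-computed candles.

```python
from collections import OrderedDict

BUCKET = 60  # one-minute candles

# bucket start timestamp -> [open, high, low, close]
candles = OrderedDict()

def record_trade(ts, price):
    """Fold a trade into its candle at write time (cheap, incremental)."""
    start = ts - (ts % BUCKET)  # align to the bucket boundary
    c = candles.get(start)
    if c is None:
        candles[start] = [price, price, price, price]
    else:
        c[1] = max(c[1], price)  # high
        c[2] = min(c[2], price)  # low
        c[3] = price             # close

def chart(t0, t1):
    """The chart query is now a range read -- no raw-trade scan."""
    return {t: c for t, c in candles.items() if t0 <= t < t1}

record_trade(120, 10.0)
record_trade(130, 12.0)
record_trade(185, 11.0)
```

A purpose-built time-series database does exactly this kind of bucketing (plus compression and retention policies) natively, which is why swapping it in shrinks the query cost so dramatically compared to aggregating inside the transactional database on every request.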
There is a great case study I read where one company spent many hundreds of thousands of dollars on powerful equipment to cut their data-report querying time down from a week to a weekend. Then a consultant came in and, after a couple of hours importing the data into the right tool, managed to generate the report in a couple of minutes on their cheap laptop.
If that isn't an advert for using the correct tools, I'm not sure what is!
For a relatively large code base, the source appeared to have been maintained to a reasonable standard of security. There was one aspect, however, that really bugged me.
While it was suggested that we set up internal networks to run the wallets, I really did not like the idea of storing plain-text passwords and host information (the RPC details) in the main transactional database. This seemed off to me, and a potential target for anyone familiar with the existing system.
It's safe to say that this is no longer the case. While I will not go into detail on what has been done, the modular system lets me easily create wallet nodes without this information ever touching the transactional database, and even better, run multiple nodes of the same type without compromising security, allowing for better scalability!
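Since the actual approach stays undisclosed, here is only a generic illustration of the principle involved (every name here is hypothetical): each wallet node loads its own RPC credentials from its local environment at start-up, so the shared transactional database never needs to hold hosts or passwords in plain text, and adding another node of the same type is just another deployment with its own secrets.

```python
import os

def load_rpc_config(prefix="WALLET_RPC"):
    """Read this node's RPC details from its own environment.

    The deployment injects these per node; they are never written
    to the shared transactional database.
    """
    cfg = {
        "host": os.environ.get(f"{prefix}_HOST", "127.0.0.1"),
        "user": os.environ.get(f"{prefix}_USER"),
        "password": os.environ.get(f"{prefix}_PASS"),
    }
    if not cfg["user"] or not cfg["password"]:
        # Fail closed rather than falling back to shared storage.
        raise RuntimeError("wallet node started without RPC credentials")
    return cfg

# Simulating what the deployment would inject before the node starts:
os.environ["WALLET_RPC_USER"] = "nodeuser"
os.environ["WALLET_RPC_PASS"] = "s3cret"
config = load_rpc_config()
```

In practice a secrets manager would typically sit behind this instead of raw environment variables, but the point is the same: credentials live with the node that needs them, not in the database everything else can read.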
Hopefully this has opened your eyes a little to what I have been up to and some of the larger problems we had to face. Just by imposing a few small rules on ourselves, we have managed to reduce costs, increase scalability and hopefully (up for debate come the first testing release) improve the user's potential experience while trading with us.
We are currently working towards an internal release; after that and some basic testing, we should be able to release a minimal-feature version (like an alpha test) to a small subset of loyal users.
Hopefully you like this type of update, and if there is anything specific you want to hear about (and it is something we are able to talk about), then please don't hesitate to get in contact through one of our communication channels (I would suggest Discord).