Intro and Motivation
In the beginning, there were three separate projects: MoMa, for managing machine learning models; Fibi, for forecasting with multiple models to get the best results; and Coeus, a business solution for segmentation and recommendations with a personalized view and basic campaigning functionality.
As more business cases came our way, we started to expand Coeus with new functionality. Before long it needed features from the other two solutions as well, so we merged all three.
The merged product's name was hard to pronounce, and not every client needed every feature it had. That triggered us to rethink and rebrand our product, and to decide that it should be a fully modular platform that can suit every client. Thus, Solver AI Suite was born.
Modularity and Pillars
We started by creating the base of our platform, which let us put building blocks on top of it: lots and lots of independent machine learning models and services.
Every instance of the platform includes this base, which we call the Solver Foundation. It contains the API for integration tools and for communication with the building blocks above it.
Modules can be deployed independently, and clients choose which modules they want to purchase. To make it easier to match the desired functionality, though, we created suggested sets of modules.
On top of every set of modules we added another API layer, called the Business API, which makes all the data easier to reach for business users and simplifies full-solution integration.
A graphical user interface sits on top of each Business API, completing a full pillar of functionality for solving a particular business case.
Note that both the Business APIs and the Apps are built with modularity in mind, so clients don't have to use our suggested sets at all.
Business Use Cases
When we presented the idea to existing and new clients, the feedback was positive, and anything the platform didn't solve at the time was added to a prioritized roadmap, which we use to expand it to this day.
Some of our pillars are:
- Solver AI Studio – Enables training, evaluating, and pushing ML models to production. It also tracks the models versioning for the client.
- Forecast Studio – Lets customers run forecasting models on any dataset they add
- Solver Smart Segmentation – Enables users to have a complete overview of all segmentation models for all customers
- Campaigning – Puts the power of all ML models and communication-channel integrations into platform users' hands so that they can target a specific set of customers with ease
- Solver Virtual Buyer – Helps users meet the needs of their unknown customers and understand them better
- Solver Personalize – All the data about each customer in one place
- Solver Anomaly Detection – Lets users know about sales spikes and sharp drops right when they happen so that they can understand them better
- Solver Process Miner – Helps you find out how your customers go through defined processes when interacting with your company
- Solver Power Leads – Tracks customer behavior as customers search for the products they want
- Solver Product Analysis – Analyze your products to improve your offer
- Reporting – See the report of how your team uses the Platform and generate different reports
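To make the anomaly-detection pillar concrete, here is a minimal sketch of the kind of check such a module can run. The actual Solver models are not described in this post; this is only an illustrative stand-in using a z-score threshold.

```python
from statistics import mean, stdev

def detect_anomalies(daily_sales, threshold=2.0):
    """Flag days whose sales deviate from the mean by more than
    `threshold` standard deviations. Illustrative sketch only."""
    mu = mean(daily_sales)
    sigma = stdev(daily_sales)
    return [
        (day, value)
        for day, value in enumerate(daily_sales)
        if sigma and abs(value - mu) / sigma > threshold
    ]

# Day 5 is an obvious spike against an otherwise flat baseline.
sales = [100, 102, 98, 101, 99, 500, 97, 103]
print(detect_anomalies(sales))
```

A production detector would use far more robust models, but the shape of the output (which day, how large the deviation) is the same kind of signal the alerts are built on.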
Architecture and Security
Every module in the Solver AI Suite is built as a microservice, deployed to Kubernetes clusters and wrapped in an Istio service mesh. That combination lets us deploy our platform to any public cloud our clients decide to use.
Istio enforces mTLS communication between services using sidecar containers, requiring two-way authentication before data can flow between them.
On top of that sits the Istio Gateway, which lets us automatically expose newly deployed services to our clients.
The CI/CD process is implemented by the book: GitHub Actions run the tests and push the image to the registry, which has saved us a lot of time on the same old boring manual steps.
On the service level we use gRPC with binary data serialization for internal communication, which makes internal data transfers more than 10x faster than a REST API. And since the binary payload needs a .proto file to be decoded, it adds another level of security. External communication is done with FastAPI, and when a service needs a database it gets its own, also for security reasons.
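The real implementation uses gRPC stubs generated from .proto files, which this post doesn't show. As a dependency-free illustration of why a fixed binary layout beats JSON on the wire, compare the two encodings of the same hypothetical record (the field names are assumptions):

```python
import json
import struct

# JSON, as a typical REST API would send it.
json_payload = json.dumps(
    {"customer_id": 123456, "segment_id": 42, "score": 0.87}
).encode()

# A fixed binary layout: two unsigned 32-bit ints and one 64-bit float.
# As with protobuf, the bytes are unreadable without the schema ("<IId").
binary_payload = struct.pack("<IId", 123456, 42, 0.87)

print(len(json_payload), len(binary_payload))
```

The binary payload is 16 bytes regardless of field names, while the JSON payload carries its keys and punctuation in every message; that size gap, plus skipping text parsing, is where most of the speedup comes from.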
To integrate with different customers across different industries, we needed to create our own data models that can feed the ML algorithms the data they need.
Since we use microservices with modern technologies, all our integration work goes into this part: making sure the data is relevant, clean, and usable before we start feeding it to the ML models.
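As a rough sketch of what "relevant, clean, and usable" means in practice, consider dropping incomplete rows, deduplicating events, and coercing types before ingestion. The field names below are illustrative assumptions, not the platform's real data model.

```python
# Illustrative pre-ingestion checks: required fields present,
# duplicate events removed, numeric fields coerced to floats.
REQUIRED = ("customer_id", "amount")

def clean(rows):
    seen, cleaned = set(), []
    for row in rows:
        if any(row.get(f) is None for f in REQUIRED):
            continue                      # incomplete -> not usable
        key = (row["customer_id"], row.get("timestamp"))
        if key in seen:
            continue                      # duplicate event
        seen.add(key)
        cleaned.append({**row, "amount": float(row["amount"])})
    return cleaned

raw = [
    {"customer_id": 1, "timestamp": "2021-01-01", "amount": "9.99"},
    {"customer_id": 1, "timestamp": "2021-01-01", "amount": "9.99"},  # duplicate
    {"customer_id": 2, "timestamp": "2021-01-02", "amount": None},    # incomplete
]
print(clean(raw))
```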
Internally, we strongly believe in an API-first approach, since the API is the part of the platform that both our developers and our clients use to communicate with it. Every module is designed in advance using the OpenAPI 3.0 standard so that it can meet the demands of every system it needs to integrate with.
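Designing a module in advance means writing a spec like the following before any code exists. This is a minimal OpenAPI 3.0 document for a hypothetical segmentation endpoint; the path and schema names are illustrative, not the real Solver API.

```python
# Minimal OpenAPI 3.0 document (as a Python dict) for a hypothetical
# module endpoint. Path and schema names are assumptions.
spec = {
    "openapi": "3.0.3",
    "info": {"title": "Segmentation Module API", "version": "1.0.0"},
    "paths": {
        "/segments/{segment_id}": {
            "get": {
                "summary": "Fetch one customer segment",
                "parameters": [{
                    "name": "segment_id",
                    "in": "path",
                    "required": True,
                    "schema": {"type": "integer"},
                }],
                "responses": {
                    "200": {
                        "description": "The requested segment",
                        "content": {
                            "application/json": {
                                "schema": {"$ref": "#/components/schemas/Segment"}
                            }
                        },
                    }
                },
            }
        }
    },
    "components": {
        "schemas": {
            "Segment": {
                "type": "object",
                "properties": {
                    "id": {"type": "integer"},
                    "name": {"type": "string"},
                },
            }
        }
    },
}
```

Because the contract exists first, client code, server stubs, and documentation can all be generated from it before the module is implemented.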
Along the way, we noticed two types of API users consuming the data: deep-integration users and business users. As the names suggest, they consume it differently: the former want to go as low-level as possible and will probably tweak everything to suit their needs, while the latter want to reach the relevant information as fast as possible.
To satisfy both, we created the Foundation API, used for communication at the base and module level, and the Business API, which calls several modules at once to extract the relevant information for each business case.
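The fan-out pattern can be sketched as follows. Every function and field name here is a hypothetical stand-in; the point is only that one Business API call aggregates several lower-level module calls into a single business view.

```python
# Hypothetical stand-ins for Foundation-level module calls.
def get_segment(customer_id):
    return {"segment": "frequent-buyer"}

def get_forecast(customer_id):
    return {"next_month_spend": 120.0}

def get_recommendations(customer_id):
    return {"recommended": ["sku-17", "sku-42"]}

def business_customer_view(customer_id):
    """One Business API call that merges several module results."""
    view = {"customer_id": customer_id}
    for call in (get_segment, get_forecast, get_recommendations):
        view.update(call(customer_id))
    return view

print(business_customer_view(1001))
```

A deep-integration user would call the three module endpoints directly and tweak each one; a business user gets the merged view in one request.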
Like the other building blocks of the platform, the GUI is made of several Apps joined together by a central hub called the Solver Portal.
Every App is created as a set of widgets, each with a single purpose that matches one Business API call. We let users rearrange the widgets as they see fit and store each user's settings, so they can be more productive every time they use the app. On the left side there is a familiar section with tabs and main settings, to make users feel at home when switching from one app to another.
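The widget-per-API-call idea and the persisted per-user layout can be sketched as a small data model. All names and endpoints below are hypothetical.

```python
# Sketch: each widget renders exactly one Business API call, and the
# widget order is the per-user setting that gets persisted.
from dataclasses import dataclass, field

@dataclass
class Widget:
    name: str
    business_api_call: str  # the single endpoint this widget renders

@dataclass
class UserLayout:
    user_id: str
    widgets: list = field(default_factory=list)

    def move(self, index, new_index):
        """Rearrange widgets; the resulting order is stored per user."""
        self.widgets.insert(new_index, self.widgets.pop(index))

layout = UserLayout("ana", [
    Widget("Sales forecast", "/business/forecast"),
    Widget("Top segments", "/business/segments/top"),
    Widget("Anomalies", "/business/anomalies"),
])
layout.move(2, 0)  # the user drags "Anomalies" to the front
print([w.name for w in layout.widgets])
```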
The graphical user interface follows Material Design, with our own internal touches to differentiate it from the many web applications on today's market.
Along with the Apps, users can use the monitoring modules, settings, and API documentation.
Along with expanding our catalog of business solutions built on ML models and neural networks, we see an opportunity to split the GUI using a micro-frontend architecture. That would let users move widgets across different apps, giving our customers full control to create the single best overview of their business solution, and it opens the option of integrating individual widgets into their existing software.
There is no one-size-fits-all in the business world, just as everywhere else, so this approach has saved us time and money when customizing solutions on a per-client basis. In the next post, I will go deeper into how we serve ML models and how we manage their lifecycle.