Designing a smarter search box for a website

Industry research is clear: as many as 43% of web-shop visitors go straight to search, and 39% of online customers say that search significantly influenced their purchase.

Even if your website is organized perfectly, customers will still get confused, and some simply prefer to search. Yet nearly every eCommerce and information website I come across has a search box that falls short of being truly usable. Building a good website search box is non-trivial.

It goes without saying that Google is the gold standard of search; Google researchers are behind many of the major advances in the science of search and modern AI. My recommended foundational reading in this area is “Attention Is All You Need”, a landmark paper published in 2017 by Google researchers, which introduced a model for machine translation. As the name suggests, the model’s defining feature is the “attention” mechanism, which identifies which parts of a sentence should be in focus. Words have different meanings depending on context, so a model that understands a specific word in its specific context can produce more precise translations.
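The attention idea itself fits in a few lines. The sketch below is a generic scaled dot-product attention in NumPy, the core operation described in the paper, not the paper’s full multi-head implementation: each query position scores every key, and the softmax over those scores decides which positions are “in focus”.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Core of the Transformer attention block: each query attends to
    all keys; the softmax weights decide which positions are in focus
    when building the output representation."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    # numerically stable softmax over each row of scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 token positions, 4-dimensional vectors
rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one context-aware vector per token position
```

Each row of `w` sums to 1, so every output vector is a weighted mixture of all value vectors, which is exactly how context enters the word representation.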

The key here is context and understanding. Your search box needs to understand meaning the same way Google Translate understands meaning; this is why the model is so valuable. Google’s architecture is called the Transformer, and it and its derivatives can understand different languages, context and intention, which makes them well suited to in-website search. They can perform a range of tasks:

  • question and answer modeling / textual search
  • tagging parts of speech
  • recognizing named entities
  • similarity matching
  • text summarization
  • text generation

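Similarity matching, the task a search box leans on most, reduces to comparing embedding vectors. The toy sketch below uses hand-made 3-dimensional vectors for clarity; in a real system the vectors would come from a Transformer encoder such as a sentence-embedding model.

```python
import numpy as np

def cosine_rank(query_vec, doc_vecs):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    return np.argsort(-scores), scores  # best matches first

# Hypothetical embeddings; a Transformer encoder would produce these.
docs = np.array([
    [0.9, 0.1, 0.0],   # "large blue hardcover"
    [0.1, 0.9, 0.0],   # "cooking recipes"
    [0.8, 0.2, 0.1],   # "big book, blue covers"
])
query = np.array([0.85, 0.15, 0.05])  # "a big book having blue covers"
order, scores = cosine_rank(query, docs)
print(order)  # most similar document indices first
```

Because the comparison happens in embedding space rather than on literal words, “big book, blue covers” scores close to the query even without exact term overlap, while the unrelated cookbook falls to the bottom.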
This was a turning point in machine learning: the golden age of natural language processing had begun. Building on this technology, and driven by the expansion of eCommerce, where a powerful search engine is one of the main components of a successful platform, we developed our own Transformer derivative to serve as the intelligence behind our search engine.

We experimented with a range of Transformer derivatives – BERT, BERTić (specialized for Cyrillic), RoBERTa mini, DistilBERT, and a paraphrase model, which gave the best results in our case.

Applying these models brought plenty of challenges, and each model had its advantages and disadvantages. The acceptance criteria were also demanding: maintain accuracy and return matches in milliseconds. We started with large models and a large feature set running on CPU, where training ran for days and some queries took a couple of seconds to answer.

Over several iterations, we tested and improved both the approach and the underlying models. The commercial solution now includes several improvements over the baseline version:

  • we adjusted the implementation to run on a GPU
  • we defined a configuration that lets us dynamically select which features the model uses and which features to boost (give priority when searching)
  • we combined two types of search in the engine – ML model search and “exact match” search
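The third point, combining ML search with exact matching, can be illustrated with a simple additive scheme. This is an illustrative weighting only, with a made-up `exact_boost` parameter; the real engine’s scoring formula is not described here.

```python
def hybrid_score(query, doc_text, semantic_score, exact_boost=0.3):
    """Combine an embedding-based similarity score with a simple
    exact-match signal: documents containing the literal query terms
    get a boost proportional to how many terms they contain.
    (Illustrative weighting only, not the production formula.)"""
    terms = query.lower().split()
    hits = sum(term in doc_text.lower() for term in terms)
    exact = hits / len(terms) if terms else 0.0
    return semantic_score + exact_boost * exact

# A literal match can outrank a slightly closer semantic match.
s1 = hybrid_score("blue book", "Big Blue Book of Bicycles", 0.70)
s2 = hybrid_score("blue book", "Azure-coloured hardcover", 0.75)
print(s1 > s2)  # the exact-match boost wins here
```

The point of the hybrid is robustness: the semantic model handles paraphrases and context, while the exact-match signal guarantees that a user typing a precise product name still sees that product first.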

When building a powerful search engine, there are many business requirements to satisfy (for example, returning a relevant match on a bookstore site when a user types “a big book having blue covers”). We put the following features in the main focus:

  • typo-tolerant search – tolerating typing and spelling mistakes while still showing relevant results
  • synonym recognition – understanding the use of synonyms, with options to define additional synonyms for a specific industry
  • context recognition – understanding the meaning even when verbs and adjectives appear in different grammatical forms
  • configurable training options – fine-tuning the model, boosting specific features or defining stop words
  • multilingual search – understanding whatever language is typed in
  • script-agnostic search – supporting both the Latin and Cyrillic alphabets

What do these functionalities mean in practice? They make the engine “smart”: it works well even when the search terms are imperfect or incomplete, catches synonyms, knows that some attributes matter more than others, and proposes the closest alternatives when no exact match is found.

Here is an example of how this might work in practice:

The high-level architecture is depicted below.


