Recommend API – Slack Engineering

Slack, as a product, presents many opportunities for recommendation, where we can make suggestions to simplify the user experience and make it more delightful. Each one seems like a great use case for machine learning, but it isn't practical for us to create a bespoke solution for each.

Instead, we developed a unified framework we call the Recommend API, which allows us to quickly bootstrap new recommendation use cases behind an API that is easily accessible to engineers at Slack. Behind the scenes, these recommenders reuse a common set of infrastructure for every part of the recommendation engine, such as data processing, model training, candidate generation, and monitoring. This has allowed us to ship a number of different recommendation models across the product, driving improved customer experience in a variety of contexts.

More than just ML models

We aim to deploy and maintain ML models in production reliably and efficiently, which is termed MLOps. This represents the majority of our team's work, while model training is a comparatively small piece of the puzzle. If you check out Matt Turck's 2021 review of the ML and data landscape, you'll see that for each segment of MLOps there are more tools on the market than you could possibly choose from, a sign that industry standards are still developing in this area. As a matter of fact, reports show a majority (up to 88%) of corporate AI initiatives are struggling to move beyond test stages. Companies such as Facebook, Netflix, and Uber usually implement their own in-house systems, and it's similar here at Slack.

Fortunately, most of the time we don't need to worry about choosing the right tool, thanks to Slack's well-maintained data warehouse ecosystem, which allows us to:

  • Schedule various tasks to run sequentially in Airflow
  • Ingest and process data from the database, as well as logs from servers, clients, and job queues
  • Query data and create dashboards to visualize and track it
  • Look up computed data from the Feature Store by primary key
  • Run experiments to facilitate feature launches with the A/B testing framework

Other teams at Slack, such as Cloud Services, Cloud Foundations, and Monitoring, have provided us with additional infrastructure and tooling that are critical to building the Recommend API.

Where is ML used in Slack?

The ML services team at Slack partners with other teams across the product to deliver impactful and delightful product changes, wherever incorporating machine learning makes sense. In practice, this looks like a number of quality-of-life improvements, where machine learning smooths rough edges of the product, simplifying user workflows and experience. This kind of recommendation experience shows up in several areas of the product.

An important result of the Recommend API — even apart from the current use cases that can be found in the product — is the nearly equal number of use cases we are currently testing internally, or have tried and abandoned. With simple tools to bootstrap new recommenders, we've empowered product teams to follow a core product design principle at Slack, "prototyping the path", testing and discovering where machine learning makes sense in our product.

Machine learning, when built up from nothing, can require heavy investment and can be quite hit-or-miss, so previously we avoided trying out many use cases that might have made sense, simply out of a fear of failure. Now, by removing that up-front cost, we've seen a proliferation of ML prototypes, and are netting more use cases for machine learning and recommendation as a result.

Unified ML workflow across the product

With such a wide variety of use cases for recommendation models, we have to be deliberate about how we organize and think about the various components. At a high level, recommenders are categorized according to "corpus", and then "source". A corpus is a type of entity — e.g. a Slack channel or user — and a source represents a particular part of the Slack product. A corpus can correspond to multiple sources — e.g. Slackbot channel suggestions and the channel browser each correspond to a distinct source, but the same corpus: channel.
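As a minimal sketch of this taxonomy, the source-to-corpus relationship is just a many-to-one mapping. The source names below are illustrative examples, not Slack's actual identifiers:

```python
# Illustrative mapping from source (a spot in the product) to corpus (the
# entity type being recommended). Names are hypothetical examples.
SOURCE_TO_CORPUS = {
    "slackbot-channel-suggestions": "channel",
    "channel-browser": "channel",
    "composer-dms": "user",
}

def corpus_for(source: str) -> str:
    """Return the corpus backing a given product source."""
    return SOURCE_TO_CORPUS[source]
```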

Regardless of corpus and use case, though, the treatment of each recommendation request is fairly similar. At a high level:

  • Our main backend serves the request, taking in a query, corpus, and source and returning a set of recommendations, which we also log.
  • When these results are interacted with in our client's frontend, we log those interactions.
  • Offline, in our data warehouse (Airflow), we combine these logs into training data to train new models, which are subsequently served to our backend as part of returning recommendations.
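The serving half of that loop can be sketched roughly as follows; `RecommendRequest`, `serve`, and the field names are assumed for illustration, not Slack's real API:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class RecommendRequest:
    query: str
    corpus: str        # e.g. "channel" or "user"
    source: str        # e.g. "channel-browser"
    request_id: str = field(default_factory=lambda: uuid.uuid4().hex)

def serve(request, recommend_fn, request_log):
    # Generate recommendations, then log the request/results pair so it can
    # later be joined with frontend interaction logs by request_id.
    results = recommend_fn(request)
    request_log.append({
        "request_id": request.request_id,
        "source": request.source,
        "results": results,
    })
    return results
```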

Here's what that workflow looks like as a whole: [workflow diagram]


Each source is associated with a "recommender" in which we implement a sequence of steps to generate a list of recommendations, which:

  • Fetch relevant candidates from various sources, including our embeddings service, where similar entities have close vector representations
  • Filter candidates based on relevancy, ownership, or visibility, e.g. private channels
  • Augment features such as entities' attributes and activities using our Feature Store
  • Score and sort the candidates with the predictions generated by the corresponding ML models
  • Rerank candidates based on additional rules

Each of these steps is built as a standardized class that is reusable between recommenders, and each recommender is in turn built as a sequence of these steps. While some use cases might require bespoke new components for these steps, often creating a new recommender from existing components is as simple as writing something like this:

final class RecommenderChannel extends Recommender {
    public function __construct() {
        parent::__construct(
            /* fetchers */ vec[new RecommendChannelFetcher()],
            /* filters  */ vec[new RecommendChannelFilterPrivate()],
            /* model    */ new RecommendLinearModel(
                /** extra features to extract **/
            ),
            /* reranker */ vec[new RecommendChannelReranker()],
        );
    }
}
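In Python terms, the pattern the Hack class above expresses boils down to running the steps in order; the class below is an illustrative sketch, not Slack's actual implementation:

```python
# A minimal sketch of the same pattern: a recommender is a sequence of
# fetch, filter, score, and rerank steps. Names are illustrative only.
class Recommender:
    def __init__(self, fetchers, filters, model, rerankers):
        self.fetchers = fetchers      # each: query -> list of candidates
        self.filters = filters        # each: candidate -> bool (keep?)
        self.model = model            # candidate -> score
        self.rerankers = rerankers    # each: ranked list -> ranked list

    def recommend(self, query):
        candidates = [c for fetch in self.fetchers for c in fetch(query)]
        for keep in self.filters:
            candidates = [c for c in candidates if keep(c)]
        ranked = sorted(candidates, key=self.model, reverse=True)
        for rerank in self.rerankers:
            ranked = rerank(ranked)
        return ranked
```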

Data processing pipelines

Besides serving recommendations based on these components, our base recommender also handles essential logging, such as tracking the initial request that was made to the API, the results returned from it, and the features our machine learning model used at scoring time. We then send the results through the Recommend API to the frontend, where user responses, such as clicks, are also logged.

With that, we schedule Airflow tasks to join the logs from the backend (server), which provide features, with those from the frontend (client), which provide responses, to generate the training data for machine learning.
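A hedged sketch of that join is below; the field names (`request_id`, `items`) are assumptions, and the real job runs in the data warehouse rather than as in-memory Python:

```python
def build_training_data(server_logs, client_logs):
    # Join backend feature logs with frontend click logs on request_id.
    # Recommended items that were never clicked become negative examples.
    clicked = {(log["request_id"], log["item"]) for log in client_logs}
    rows = []
    for log in server_logs:
        for item, features in log["items"].items():
            rows.append({
                **features,
                "item": item,
                "label": 1 if (log["request_id"], item) in clicked else 0,
            })
    return rows
```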

Model training pipelines

Models are then scheduled to be trained in Airflow by running Kubernetes Jobs, and are served on Kubernetes clusters. With that we score the candidates and complete the cycle, thereafter starting a new cycle of logging, training, and serving again.

For each source we often experiment with various models, such as logistic regression and XGBoost. We set things up to make sure it's simple to add and productionize new models. Below you can see the six models in total we're experimenting with for people-browser as the source, and the amount of Python code needed to train the XGBoost ranking model:

    "objective": "rank:map",
    "n_estimators": 500,
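For context, here is a hedged sketch of how parameters like these plug into XGBoost's scikit-learn ranking interface. Only the objective and number of estimators come from the snippet above; the other values are illustrative defaults, not the ones we use:

```python
# Parameters for an XGBoost ranking model; "rank:map" optimizes Mean
# Average Precision. learning_rate and max_depth are illustrative.
XGB_RANK_PARAMS = {
    "objective": "rank:map",
    "n_estimators": 500,
    "learning_rate": 0.1,
    "max_depth": 6,
}

def make_ranker(params=XGB_RANK_PARAMS):
    # Deferred import so the config above is usable without xgboost installed.
    import xgboost as xgb
    return xgb.XGBRanker(**params)
```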


We also output metrics from different components so that we can get an overall picture of how the models are performing. When a new model is productionized, the metrics are automatically updated to track its performance.

  • Reliability metrics: Prometheus metrics from the backend to track the number of requests and errors
  • Efficiency metrics: Prometheus metrics from the model serving service, such as throughput and latency, to make sure we're responding fast enough to all the requests
  • Online metrics: business metrics that we share with external stakeholders. Some of the most important metrics we track are clickthrough rate (CTR), and ranking metrics such as discounted cumulative gain (DCG). Online metrics are frequently checked and monitored to make sure the model, plus the overall end-to-end process, is working properly in production
  • Offline metrics: metrics to compare various models at training time and decide which one we potentially want to experiment with and productionize. We set aside validation data, separate from the training data, so that we know the model can perform well on data it hasn't seen yet. We track common classification and ranking metrics for both training and validation data
  • Feature stats: metrics to monitor feature distribution and feature importance, on which we run anomaly detection to prevent distribution shift
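Two of the online metrics named above are concrete enough to pin down in a few lines; a minimal stdlib-only sketch:

```python
import math

def ctr(clicks: int, impressions: int) -> float:
    # Clickthrough rate: fraction of recommendations that got clicked.
    return clicks / impressions if impressions else 0.0

def dcg(relevances) -> float:
    # Discounted cumulative gain: relevance at rank i (1-indexed) is
    # discounted by log2(i + 1), so items placed near the top count more.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))
```

Putting the more relevant items first yields a higher DCG for the same set of results, which is exactly what a ranking model is rewarded for.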

Iteration and experimentation

In order to train a model, we need data: both features and responses. Most often, our work targets active Slack users, so we usually have features to work with. However, without a model, we can't generate recommendations for users to interact with in order to collect the responses. This is one variant of the cold start problem that is prevalent in building recommendation engines, and it's where our hand-tuned model comes into play.

During the first iteration we'll typically rely on a hand-tuned model based on common knowledge and simple heuristics; e.g. for send-time optimization, we are more likely to send invite reminders when the team or inviter is more active. At the same time, we brainstorm relevant features and begin extracting them from the Feature Store and logging them. This gives us the first batch of training data to iteratively improve upon.
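A hand-tuned model for that invite-reminder heuristic could be as simple as a weighted sum; the weights below are made-up placeholders, not Slack's actual values:

```python
def invite_reminder_score(team_activity: float, inviter_activity: float,
                          w_team: float = 0.6, w_inviter: float = 0.4) -> float:
    # First-iteration "model": a hand-picked linear blend of two activity
    # signals, standing in until logged responses allow a learned model.
    return w_team * team_activity + w_inviter * inviter_activity
```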

We rely on extensive A/B testing to make sure the ML models are doing their job of improving recommendation quality. Whenever we switch from hand-tuned to model-based recommendations, or experiment with different sets of features or more sophisticated models, we run experiments and confirm the change is boosting the key business metrics. We'll typically be looking at metrics such as CTR, successful teams, or other metrics related to specific parts of Slack.
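For illustration only, the kind of check behind "the change is boosting CTR" can be sketched as a pooled two-proportion z-test; real experiment analysis at Slack goes through the A/B testing framework, not this snippet:

```python
import math

def ctr_z_score(clicks_a, n_a, clicks_b, n_b):
    # Pooled two-proportion z-test: how many standard errors apart are the
    # control (a) and treatment (b) clickthrough rates?
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```

A |z| above roughly 1.96 corresponds to significance at the usual 5% level.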

Following is a list of recent wins we've achieved with the aforementioned ML-powered features, measured in CTR.

  • Composer DMs: +38.86% when migrating from the hand-tuned model to logistic regression, and more recently +5.70% with an XGBoost classification model and an extended feature set
  • Creator invite flow: +15.31% when migrating from the hand-tuned model to logistic regression
  • Slackbot channel suggestions: +123.57% for leave suggestions and +31.92% for archive suggestions when migrating from the hand-tuned model to an XGBoost classification model
  • Channel browser recommendations: +14.76% when migrating from the hand-tuned model to an XGBoost classification model. Below we can see the impact of the channel browser experiment over time: [trend chart]

Final thoughts

The Recommend API has been used to serve ML models over the last couple of years, though it took much longer to build the groundwork of the various services backing the infrastructure. The unified approach of the Recommend API makes it possible to rapidly prototype and productionize ML models across the product. Meanwhile, we're constantly improving:

  • The data logging and preprocessing process, so that it can be extended to more use cases
  • Model training infrastructure, e.g. scaling, hardware acceleration, and debuggability
  • Model explainability and model introspection tooling using SHAP

We're also reaching out to various teams across the Slack organization for more opportunities to collaborate on new parts of the product that could be improved with ML.


We want to give a shout out to all the people who have contributed to this journey: Fiona Condon, Xander Johnson, Kyle Jablon.

Interested in taking on interesting projects, making people's work lives easier, or just building some pretty cool forms? We're hiring! 💼 Apply now