Tulip: Schematizing Meta’s data platform

  • We’re sharing Tulip, a binary serialization protocol supporting schema evolution. 
  • Tulip assists with data schematization by addressing protocol reliability and other issues simultaneously. 
  • It replaces multiple legacy formats used in Meta’s data platform and has achieved significant performance and efficiency gains.

There are numerous heterogeneous services, such as warehouse data storage and various real-time systems, that make up Meta’s data platform, all exchanging large amounts of data among themselves as they communicate via service APIs. As we continue to grow the number of AI- and machine learning (ML)-related workloads in our systems that leverage data for tasks such as training ML models, we’re continually working to make our data logging systems more efficient.

Schematization of data plays an important role in a data platform at Meta’s scale. These systems are designed with the knowledge that every decision and trade-off can impact the reliability, performance, and efficiency of data processing, as well as our engineers’ developer experience. 

Making huge bets, like changing serialization formats for the entire data infrastructure, is challenging in the short term, but offers greater long-term benefits that help the platform evolve over time.

The challenge of a data platform at exabyte scale 

The data analytics logging library is present in the web tier as well as in internal services. It is responsible for logging analytical and operational data via Scribe (Meta’s persistent and durable message queuing system). Various services read and ingest data from Scribe, including (but not limited to) the data platform Ingestion Service and real-time processing systems such as Puma, Stylus, and XStream. The data analytics reading library correspondingly assists in deserializing data and rehydrating it into a structured payload. While this article will focus only on the logging library, the narrative applies to both.

Figure 1: High-level system diagram for analytics-logging data flow at Meta.

At the scale at which Meta’s data platform operates, thousands of engineers create, update, and delete logging schemas every month. These logging schemas see petabytes of data flowing through them every day over Scribe.

Schematization is critical to ensure that any message logged in the present, past, or future, relative to the version of the (de)serializer, can be (de)serialized reliably at any point in time with the highest fidelity and no loss of data. This property is called safe schema evolution via forward and backward compatibility.

This article will focus on the on-wire serialization format chosen to encode data that is finally processed by the data platform. We describe the evolution of this design, the trade-offs considered, and the resulting improvements. From an efficiency standpoint, the new encoding format needs 40 percent to 85 percent fewer bytes, and uses 50 percent to 90 percent fewer CPU cycles to (de)serialize data compared with the previously used serialization formats, namely Hive Text Delimited and JSON serialization.
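
To see intuitively where a binary encoding saves bytes, consider a toy comparison below. It is illustrative only (the record, field layout, and one-byte field headers are invented for this sketch, and the percentages above come from production measurements, not from this example): JSON repeats field names and spells integers out in ASCII, while a varint-based binary encoding stores only compact value bytes plus small headers.

```python
import json

def encode_varint(value: int) -> bytes:
    """Encode a non-negative integer as a little-endian base-128 varint."""
    out = bytearray()
    while True:
        byte = value & 0x7F
        value >>= 7
        if value:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

record = {"user_id": 123456789, "event_ts": 1650000000, "clicks": 42}

# JSON carries field names and ASCII digits on the wire.
text_size = len(json.dumps(record).encode())

# Hypothetical binary layout: one header byte per field plus a varint value.
binary_size = sum(len(encode_varint(v)) for v in record.values()) + len(record)

print(text_size, binary_size)
```

For this record the binary form is roughly a fifth the size of the JSON form; the gap widens further once string escaping and nested structures enter the picture.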

How we developed Tulip

An overview of the data analytics logging library 

The logging library is used by applications written in various languages (such as Hack, C++, Java, Python, and Haskell) to serialize a payload according to a logging schema. Engineers define logging schemas according to business needs. These serialized payloads are written to Scribe for durable delivery.

The logging library itself comes in two flavors:

  1. Code-generated: In this flavor, statically typed setters for each field are generated for type-safe usage. Additionally, post-processing and serialization code is also code-generated (where applicable) for maximum efficiency. For example, Hack’s thrift serializer uses a C++ accelerator, where code generation is partially employed.
  2. Generic: A C++ library called Tulib (not to be confused with Tulip) is provided to perform (de)serialization of dynamically typed payloads. In this flavor, a dynamically typed message is serialized according to a logging schema. This mode is more flexible than the code-generated mode because it allows (de)serialization of messages without rebuilding and redeploying the application binary.
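
The generic flavor can be illustrated with a short sketch. Everything below is invented for illustration (the schema layout, the encoder table, and the length-prefixed wire format are not Tulib’s actual implementation); the point is that the serializer walks the schema at runtime, so no per-schema code needs to be generated or deployed.

```python
from typing import Any, Dict, List, Tuple

# Hypothetical logging schema: an ordered list of (field name, field type) pairs.
SCHEMA: List[Tuple[str, str]] = [("title", "string"), ("isbn", "i64")]

# Per-type encoders looked up dynamically at serialization time.
ENCODERS = {
    "string": lambda v: str(v).encode(),
    "i64": lambda v: int(v).to_bytes(8, "little", signed=True),
}

def serialize_generic(msg: Dict[str, Any], schema: List[Tuple[str, str]] = SCHEMA) -> bytes:
    """Generic flavor: interpret the schema at runtime instead of generating code."""
    out = bytearray()
    for name, ftype in schema:
        payload = ENCODERS[ftype](msg[name])
        # Length-prefix each field so the reader can walk the payload.
        out += len(payload).to_bytes(4, "little") + payload
    return bytes(out)

blob = serialize_generic({"title": "Tulip", "isbn": 9781234567897})
```

A code-generated serializer would instead emit a dedicated, statically typed function per schema, trading this flexibility for speed.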

Legacy serialization format

The logging library writes data to several back-end systems that have historically dictated their own serialization mechanisms. For example, warehouse ingestion uses Hive Text Delimiters during serialization, while other systems use JSON serialization. There are various problems when using one or both of these formats for serializing payloads.

  1. Standardization: Previously, every downstream system had its own format, and there was no standardization of serialization formats. This increased development and maintenance costs.
  2. Reliability: The Hive Text Delimited format is positional in nature. To maintain deserialization reliability, new columns can be added only at the end. Any attempt to add fields in the middle or to delete columns will shift all the columns after it, making the row impossible to deserialize (since a row is not self-describing, unlike in JSON). We distribute the updated schema to readers in real time.
  3. Efficiency: Both the Hive Text Delimited and JSON protocols are text-based and inefficient compared with binary (de)serialization.
  4. Correctness: Text-based protocols such as Hive Text require escaping and unescaping of control characters, field delimiters, and line delimiters. This is done by every writer/reader and puts an additional burden on library authors. It’s challenging to deal with legacy or buggy implementations that only check for the presence of such characters and disallow the entire message instead of escaping the problematic characters.
  5. Forward and backward compatibility: It’s desirable for consumers to be able to consume payloads that were serialized by a serialization schema both before and after the version that the consumer sees. The Hive Text Protocol doesn’t provide this guarantee.
  6. Metadata: Hive Text Serialization doesn’t trivially permit the addition of metadata to the payload. Propagation of metadata to downstream systems is critical for implementing features that benefit from its presence. For example, certain debugging workflows benefit from having a hostname or a checksum transferred along with the serialized payload.

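The positional fragility described in the reliability point is easy to demonstrate. The sketch below is deliberately simplified (real Hive Text uses control-character delimiters plus escaping rules, and the schema names are invented): once a reader’s column list gains a field in the middle, every later column of an old row is silently mis-assigned.

```python
DELIM = "\x01"  # simplified stand-in for Hive's control-character delimiter

def hive_serialize(values) -> str:
    """Positional, delimiter-separated row: nothing on the wire names the columns."""
    return DELIM.join(str(v) for v in values)

def hive_deserialize(row: str, column_names):
    """The reader assigns values to columns purely by position."""
    return dict(zip(column_names, row.split(DELIM)))

old_schema = ["title", "isbn"]
new_schema = ["title", "authors", "isbn"]  # a field was added in the middle

# Row written by a writer still on the old schema.
row = hive_serialize(["Tulip", 9781234567897])

# A reader holding the new schema mis-assigns columns: the ISBN lands in "authors".
decoded = hive_deserialize(row, new_schema)
print(decoded)  # {'title': 'Tulip', 'authors': '9781234567897'}
```

A self-describing format avoids this because each value travels with an identifier, which is exactly the property Tulip’s field IDs provide.
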
The fundamental problem that Tulip solved is the reliability issue: It guarantees a safe schema evolution format with forward and backward compatibility across services that each have their own deployment schedules. 

One could have imagined solving the other problems independently by pursuing a different strategy, but the fact that Tulip was able to solve all of these problems at once made it a much more compelling investment than the other options.

Tulip serialization

The Tulip serialization protocol is a binary serialization protocol that uses Thrift’s TCompactProtocol for serializing a payload. It follows the same rules for numbering fields with IDs as one would expect an engineer to use when updating IDs in a Thrift struct.
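
The compact wire format can be sketched in a few lines. The following is a simplified illustration of two TCompactProtocol building blocks, the short-form field header (a 4-bit field-ID delta packed with a 4-bit type code) and zigzag varint integer encoding; the real protocol also has long-form headers, more types, and struct nesting, all omitted here.

```python
def zigzag(n: int) -> int:
    """ZigZag-encode a signed 64-bit int so small magnitudes stay small."""
    return (n << 1) ^ (n >> 63)

def varint(value: int) -> bytes:
    """Little-endian base-128 varint of a non-negative integer."""
    out = bytearray()
    while True:
        b = value & 0x7F
        value >>= 7
        out.append(b | 0x80 if value else b)
        if not value:
            return bytes(out)

TYPE_I64 = 0x06  # TCompactProtocol wire-type code for i64

def field_header(prev_id: int, field_id: int, wire_type: int) -> bytes:
    """Short-form compact header: the ID delta (1..15) packed with the type."""
    delta = field_id - prev_id
    assert 0 < delta <= 15, "long-form header needed for larger deltas"
    return bytes([(delta << 4) | wire_type])

# Field ID 1 holding the i64 value 42 costs just two bytes on the wire.
encoded = field_header(0, 1, TYPE_I64) + varint(zigzag(42))
```

Because each value is prefixed by its field ID and type, a reader can skip IDs it doesn’t recognize, which is what makes the forward/backward-compatibility guarantees possible.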

When engineers author a logging schema, they specify a list of field names and types. Field IDs are not specified by engineers, but are instead assigned by the data platform management module.

Figure 2: Logging schema authoring flow.

This figure shows the user-facing workflow when an engineer creates or updates a logging schema. Once validation succeeds, the changes to the logging schema are published to various systems in the data platform.

The logging schema is translated into a serialization schema and stored in the serialization schema repository. A serialization config holds lists of (field name, field type, field ID) for a corresponding logging schema, as well as the field history. A transactional operation is performed on the serialization schema when an engineer wants to update a logging schema.

Figure 3: Tulip serialization schema evolution.

The example above shows the creation and update of a logging schema, and its impact on the serialization schema over time.

  1. Field addition: When a new field named “authors” is added to the logging schema, a new ID is assigned in the serialization schema.
  2. Field type change: Similarly, when the type of the field “isbn” is changed from “i64” to “string”, a new ID is associated with the new field, but the ID of the original “i64”-typed “isbn” field is retained in the serialization schema. When the underlying data store doesn’t allow field type changes, the logging library disallows this change.
  3. Field deletion: IDs are never removed from the serialization schema, allowing full backward compatibility with already serialized payloads. The fields in a serialization schema for a logging schema are indelible even when fields in the logging schema are added or removed.
  4. Field rename: There’s no concept of a field rename, and this operation is treated as a field deletion followed by a field addition.
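
The four evolution rules above can be condensed into a small sketch. The class below is hypothetical (it is not the actual data platform management module, and the method names are invented); it only demonstrates the invariants: IDs are platform-assigned, monotonically increasing, never reused, and never deleted from the history.

```python
from typing import Dict, List, Tuple

class SerializationSchema:
    """Hypothetical Tulip-style serialization schema for one logging schema."""

    def __init__(self) -> None:
        self.next_id = 1
        self.history: List[Tuple[str, str, int]] = []  # (name, type, id), append-only
        self.live: Dict[str, int] = {}                 # field name -> current ID

    def add_field(self, name: str, ftype: str) -> int:
        """Field addition: the platform assigns the next unused ID."""
        fid = self.next_id
        self.next_id += 1
        self.history.append((name, ftype, fid))
        self.live[name] = fid
        return fid

    def change_type(self, name: str, new_type: str) -> int:
        """Field type change: a fresh ID; the old (name, type, id) entry is retained."""
        return self.add_field(name, new_type)

    def delete_field(self, name: str) -> None:
        """Field deletion: the ID stays in history so old payloads remain decodable."""
        self.live.pop(name, None)

schema = SerializationSchema()
schema.add_field("isbn", "i64")               # ID 1
schema.add_field("authors", "list<string>")   # field addition: ID 2
schema.change_type("isbn", "string")          # ID 3; ID 1 kept in history
schema.delete_field("authors")                # ID 2 survives in history
```

A rename falls out of these rules for free: it is simply `delete_field` on the old name followed by `add_field` with the new one.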


We’d like to thank all the members of the data platform team who helped make this project a success. Without the XFN support of these teams and engineers at Meta, this project would not have been possible.

A special thank-you to Sriguru Chakravarthi, Sushil Dhaundiyal, Hung Duong, Stefan Filip, Manski Fransazov, Alexander Gugel, Paul Harrington, Manos Karpathiotakis, Thomas Lento, Harani Mukkala, Pramod Nayak, David Pletcher, Lin Qiao, Milos Stojanovic, Ezra Stuetzel, Huseyin Tan, Bharat Vaidhyanathan, Dino Wernli, Kevin Wilfong, Chong Xie, Jingjing Zhang, and Zhenyuan Zhao.