Apache Kafka is an open-source stream-processing platform, developed by the Apache Software Foundation and written in Scala and Java. The project aims to provide a unified, high-throughput, low-latency platform for handling real-time data feeds. Its storage layer is essentially a "massively scalable pub/sub message queue architected as a distributed transaction log," which makes it highly valuable for enterprise infrastructures that process streaming data. Additionally, Kafka connects to external systems (for data import/export) via Kafka Connect and provides Kafka Streams, a Java stream-processing library.
| | |
|---|---|
| Developer(s) | Apache Software Foundation |
| Initial release | January 2011 |
| Stable release | 0.11 / June 28, 2017 |
| Written in | Scala, Java |
| Type | Stream processing, Message broker |
| License | Apache License 2.0 |
Apache Kafka was originally developed at LinkedIn and was subsequently open-sourced in early 2011. It graduated from the Apache Incubator on 23 October 2012. In November 2014, several engineers who worked on Kafka at LinkedIn created a new company named Confluent with a focus on Kafka. According to a Quora post from 2014, Jay Kreps named it after the author Franz Kafka: he chose to name the system after an author because it is "a system optimized for writing", and he liked Kafka's work.
Apache Kafka Architecture
Kafka stores messages that come from arbitrarily many processes called "producers". The data is divided into "partitions" within different "topics". Within a partition, messages are indexed by offset and stored together with a timestamp. Other processes called "consumers" read messages from partitions. Kafka runs on a cluster of one or more servers, and partitions can be distributed across cluster nodes.
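The topic/partition/offset model above can be illustrated with a toy in-memory sketch. This is not Kafka's implementation (the `ToyTopic` class and its hashing are illustrative assumptions; Kafka's real default partitioner hashes key bytes with murmur2), but it shows how records with the same key land in the same partition and how consumers read sequentially from an offset:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model (illustrative only, NOT Kafka's implementation): a topic holds
// several partitions, each an append-only log indexed by offset.
class ToyTopic {
    private final List<List<String>> partitions = new ArrayList<>();

    ToyTopic(int numPartitions) {
        for (int i = 0; i < numPartitions; i++) {
            partitions.add(new ArrayList<>());
        }
    }

    // Producer side: records with the same key always land in the same
    // partition, so their relative order is preserved.
    long send(String key, String value) {
        List<String> partition = partitions.get(partitionFor(key));
        partition.add(value);
        return partition.size() - 1; // offset of the appended record
    }

    int partitionFor(String key) {
        return Math.floorMod(key.hashCode(), partitions.size());
    }

    // Consumer side: read everything in one partition from a given offset on.
    List<String> poll(int partition, int fromOffset) {
        List<String> log = partitions.get(partition);
        return log.subList(fromOffset, log.size());
    }
}

public class ToyTopicDemo {
    public static void main(String[] args) {
        ToyTopic clicks = new ToyTopic(3);
        long first = clicks.send("user-42", "page=/home");
        long second = clicks.send("user-42", "page=/pricing");
        // Same key => same partition => consecutive offsets:
        System.out.println(second == first + 1); // prints "true"
        int p = clicks.partitionFor("user-42");
        System.out.println(clicks.poll(p, 0));
    }
}
```

Note that ordering is only guaranteed within a single partition, which is why keyed records matter: all events for one key stay in order.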
Apache Kafka efficiently processes real-time and streaming data when implemented alongside Apache Storm, Apache HBase and Apache Spark. Deployed as a cluster on multiple servers, Kafka handles its entire publish-subscribe messaging system with the help of four APIs: the producer API, consumer API, streams API and connector API. Its ability to deliver massive streams of messages in a fault-tolerant fashion has led it to replace some conventional messaging systems such as JMS and AMQP.
The major terms of Kafka's architecture are topics, records, and brokers. A topic is a stream of records holding related information; brokers store the records and are responsible for replicating them across the cluster. There are four major APIs in Kafka:
- Producer API – permits applications to publish streams of records to topics.
- Consumer API – permits applications to subscribe to topics and process the streams of records.
- Streams API – consumes input streams from topics and transforms them into output streams.
- Connector API – provides reusable producers and consumers that connect topics to existing applications and data systems.
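As a rough sketch of how applications wire up the producer and consumer APIs, the configuration below uses the standard Kafka client property names; the host names, topic group, and comments are placeholders for illustration, not a recommended production setup:

```properties
# Producer API client configuration (broker host/port are placeholders)
bootstrap.servers=broker1:9092,broker2:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
# Wait for full in-sync replication before acknowledging a send
acks=all

# Consumer API client configuration
# Consumers sharing a group.id split the topic's partitions among themselves
group.id=example-consumer-group
key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
# Start from the beginning of the log if no committed offset exists
auto.offset.reset=earliest
```

The consumer-group setting is what turns Kafka's partitioned log into a scalable subscription: adding consumers to a group spreads partitions, and therefore load, across them.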
Due to its widespread integration into enterprise-level infrastructures, monitoring Kafka performance at scale has become an increasingly important issue. Monitoring end-to-end performance requires tracking metrics from brokers, consumers, and producers, in addition to monitoring ZooKeeper, which Kafka uses for coordination among consumers. There are currently several monitoring platforms that track Kafka performance, either open-source, like LinkedIn's Burrow, or paid, like Datadog. In addition to these platforms, collecting Kafka data can also be performed using tools commonly bundled with Java, including JConsole.
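Tools like JConsole read these metrics over JMX. The sketch below uses only the standard JMX API to query the local platform MBean server, the same mechanism such tools use; to keep it runnable without a broker, it queries a standard JVM MBean rather than a Kafka one (a real broker registers its metrics under JMX domains such as `kafka.server`):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxMetricsPeek {
    public static void main(String[] args) throws Exception {
        // JConsole reads metrics over JMX; every JVM exposes an MBean server.
        MBeanServer mbeans = ManagementFactory.getPlatformMBeanServer();

        // A Kafka broker registers its metrics under domains like "kafka.server"
        // (e.g. type=BrokerTopicMetrics,name=MessagesInPerSec). This sketch
        // queries a standard JVM MBean through the same mechanism instead:
        ObjectName memory = new ObjectName("java.lang:type=Memory");
        Object heap = mbeans.getAttribute(memory, "HeapMemoryUsage");
        System.out.println("HeapMemoryUsage = " + heap);
    }
}
```

Pointing JConsole (or any JMX client) at a broker's JMX port gives the same view of the broker's own metric MBeans.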
Enterprises that use Kafka
The following is a list of notable enterprises and projects that have used or are using Kafka:
- Cisco (OpenSOC)
- Conviva
- eBay
- HubSpot
- Hyperledger Fabric
- LinkedIn
- Netflix
- The New York Times
- PayPal
- Shopify
- Spotify
- Ticketmaster
- Uber
- Yelp