Kafka: The Definitive Guide: Real-Time Data and Stream Processing at Scale, 2/e (Paperback)
Shapira, Gwen, Palino, Todd, Sivaram, Rajini
- Publisher: O'Reilly
- Publication date: 2021-12-14
- List price: $2,700
- Sale price: 5% off, $2,565
- VIP price: 10% off, $2,430
- Language: English
- Pages: 455
- Binding: Quality Paper (also called trade paper)
- ISBN-10: 1492043087
- ISBN-13: 9781492043089
- Related categories: Message Queue
- Availability: in stock, ships immediately (stock = 1)
Customers who bought this item also bought...
- Cassandra: The Definitive Guide, 2/e ($990)
- Practical Time Series Analysis: Prediction with Statistics and Machine Learning (Paperback) ($2,520)
- 語音信號處理, 3/e ($454)
- Deep Learning with JavaScript: Neural Networks in TensorFlow.js ($1,568)
- Design Patterns for Cloud Native Applications: Patterns in Practice Using APIs, Data, Events, and Streams ($1,760)
- Continuous Architecture in Practice: Software Architecture in the Age of Agility and DevOps (Paperback) ($1,663)
- PostgreSQL 技術內幕:事務處理深度探索 ($505)
- Multithreaded JavaScript: Concurrency Beyond the Event Loop ($2,070)
- Spring Start Here: Learn What You Need and Learn It Well (Paperback) ($1,710)
- Software Architecture: The Hard Parts: Modern Trade-Off Analyses for Distributed Architectures (Paperback) ($2,475)
- Cloud Native DevOps with Kubernetes: Building, Deploying, and Scaling Modern Applications in the Cloud (Paperback) ($2,288)
- Kafka 基礎架構與設計 ($214)
Product Description
Every enterprise application creates data, whether it consists of log messages, metrics, user activity, or outgoing messages. Moving all this data is just as important as the data itself. With this updated edition, application architects, developers, and production engineers new to the Kafka streaming platform will learn how to handle data in motion. Additional chapters cover Kafka's AdminClient API, transactions, new security features, and tooling changes.
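For readers who have not seen the AdminClient API mentioned above, the sketch below shows the general shape of programmatic cluster administration: creating a topic with Kafka's Java AdminClient. This is an illustrative sketch, not an excerpt from the book; the broker address, topic name, partition count, and replication factor are assumptions.

    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    import java.util.Collections;
    import java.util.Properties;

    public class CreateTopicSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            // Assumed broker address, for illustration only.
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

            // try-with-resources closes the client and its network threads.
            try (AdminClient admin = AdminClient.create(props)) {
                // Hypothetical topic: 3 partitions, replication factor 1.
                NewTopic topic = new NewTopic("orders", 3, (short) 1);
                admin.createTopics(Collections.singletonList(topic)).all().get();
            }
        }
    }

The same client also exposes operations for describing and deleting topics and for inspecting configurations, which is the ground the book's AdminClient coverage walks through.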
Engineers from Confluent and LinkedIn responsible for developing Kafka explain how to deploy production Kafka clusters, write reliable event-driven microservices, and build scalable stream processing applications with this platform. Through detailed examples, you'll learn Kafka's design principles, reliability guarantees, key APIs, and architecture details, including the replication protocol, the controller, and the storage layer.
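To give a feel for the producer API those event-driven examples build on, here is a minimal, hypothetical producer sketch in Java. The bootstrap server, topic name, and record contents are assumptions made for illustration; acks=all and idempotence are the kind of reliability settings the book's delivery chapters discuss.

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    import java.util.Properties;

    public class ProducerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Reliability-oriented settings of the kind discussed in the book.
            props.put(ProducerConfig.ACKS_CONFIG, "all");
            props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // Hypothetical topic, key, and value.
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("orders", "order-42", "{\"status\":\"created\"}");
                // Asynchronous send; the callback reports success or failure per record.
                producer.send(record, (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();
                    } else {
                        System.out.printf("Wrote to %s-%d@%d%n",
                                metadata.topic(), metadata.partition(), metadata.offset());
                    }
                });
            } // closing the producer flushes any buffered records
        }
    }

send() returns immediately and the callback reports the outcome per record, which is why closing (or flushing) the producer before exit matters for delivery guarantees.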
You'll examine:
- Best practices for deploying and configuring Kafka
- Kafka producers and consumers for writing and reading messages (see the consumer sketch after this list)
- Patterns and use-case requirements to ensure reliable data delivery
- Best practices for building data pipelines and applications with Kafka
- How to perform monitoring, tuning, and maintenance tasks with Kafka in production
- The most critical metrics among Kafka's operational measurements
- Kafka's delivery capabilities for stream processing systems
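To make the producer/consumer bullet above concrete, here is a matching consumer sketch, again under assumptions: a local broker, an "orders" topic, and a hypothetical consumer group. It disables auto-commit and commits offsets only after records are processed, one common reliable-delivery pattern.

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;

    public class ConsumerSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "orders-reader");           // hypothetical group
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Commit offsets manually, only after records have been processed.
            props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("orders"));
                for (int i = 0; i < 10; i++) { // bounded loop for the example
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("%s => %s%n", record.key(), record.value());
                    }
                    consumer.commitSync(); // synchronous commit after processing
                }
            }
        }
    }

Committing after processing trades possible duplicate reads on failure for no lost records (at-least-once delivery), the kind of reliable-delivery trade-off the bullets above refer to.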
About the Authors
Gwen Shapira is a system architect at Confluent, helping customers achieve success with their Apache Kafka implementations. She has 15 years of experience working with code and customers to build scalable data architectures, integrating relational and big data technologies. She currently specializes in building real-time, reliable data processing pipelines using Apache Kafka. Gwen is an Oracle ACE Director, an author of Hadoop Application Architectures, and a frequent presenter at data-driven conferences. She is also a committer on the Apache Kafka and Apache Sqoop projects.
Todd Palino is a Staff Site Reliability Engineer at LinkedIn, tasked with keeping the largest deployment of Apache Kafka, ZooKeeper, and Samza fed and watered. He is responsible for architecture, day-to-day operations, and tools development, including the creation of an advanced monitoring and notification system. Todd is the developer of the open source project Burrow, a Kafka consumer monitoring tool, and can be found sharing his experience with Apache Kafka at industry conferences and tech talks. Todd has spent more than 20 years in the technology industry running infrastructure services, most recently as a Systems Engineer at Verisign, developing service management automation for DNS, networking, and hardware management, as well as managing hardware and software standards across the company.
Rajini Sivaram is a Software Engineer at Confluent, designing and developing security features for Kafka. She is an Apache Kafka committer and a member of the Apache Kafka Project Management Committee. Prior to joining Confluent, she was at Pivotal, working on a high-performance reactive API for Kafka based on Project Reactor. Earlier, Rajini was a key developer of IBM Message Hub, which provides Kafka-as-a-Service on the IBM Bluemix platform. Her experience ranges from parallel and distributed systems to Java virtual machines and messaging systems.
Krit Petty is the Site Reliability Engineering Manager for Kafka at LinkedIn. Before becoming manager, he worked as an SRE on the team, expanding Kafka to overcome the hurdles of scaling it to never-before-seen heights, including taking the first steps toward moving LinkedIn's large-scale Kafka deployments into Microsoft's Azure cloud. Krit has a master's degree in computer science and previously managed Linux systems and worked as a software engineer developing software for high-performance computing projects in the oil and gas industry.