
Creating a Streaming Data Pipeline With Apache Kafka


Checkpoints


Creating an Apache Kafka deployment manager

Configure the Kafka VM instance

Create topics in Kafka

Process the input data with Kafka Streams


45 minutes · 5 Credits

This lab was developed with our partner, Confluent. Your personal information may be shared with Confluent, the lab sponsor, if you have opted-in to receive product updates, announcements, and offers in your Account Profile.

GSP 730

Google Cloud Self-Paced Labs

Overview

In this lab, you create a streaming data pipeline with Kafka, giving you a hands-on look at the Kafka Streams API. You will run a Java application that uses the Kafka Streams library to showcase a simple end-to-end data pipeline powered by Apache Kafka®.

Objectives

In this lab, you will:

  • Start a Kafka cluster on a Compute Engine single machine

  • Write example input data to a Kafka topic, using the console producer included in Kafka

  • Process the input data with a Java application called WordCount that uses the Kafka Streams library

  • Inspect the output data of the application, using the console consumer included in Kafka
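The core transformation the WordCount application performs can be sketched in plain Java. This is not the Kafka Streams topology itself (that uses the `kafka-streams` library's `KStream`/`KTable` APIs); it is a minimal illustration, with an assumed class name `WordCountSketch`, of what the application computes for each input line: lowercase it, split it into words, and keep a running count per word, much like the state store behind the Streams app's `KTable`.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class WordCountSketch {
    // Running word counts, analogous to the state store the Streams app maintains.
    private final Map<String, Long> counts = new LinkedHashMap<>();

    // Process one input line: lowercase it, split on non-word characters,
    // and increment each word's running count.
    public void process(String line) {
        for (String word : line.toLowerCase().split("\\W+")) {
            if (!word.isEmpty()) {
                counts.merge(word, 1L, Long::sum);
            }
        }
    }

    public Map<String, Long> counts() {
        return counts;
    }

    public static void main(String[] args) {
        WordCountSketch wc = new WordCountSketch();
        // Example input lines, as you might type into the console producer.
        wc.process("all streams lead to kafka");
        wc.process("hello kafka streams");
        // "kafka" and "streams" each appear twice; the other words once.
        System.out.println(wc.counts());
    }
}
```

In the lab itself, the input lines arrive from a Kafka topic via the console producer, and the updated counts are written to an output topic that you inspect with the console consumer.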
