Google Cloud Data Engineer Exam - Cloud Pub/Sub Review Notes

https://cloud.google.com/pubsub/docs/ordering

Order in the final result matters

Typical Use Cases: Logs, state updates

In use cases in this category, the order in which messages are processed does not matter; all that matters is that the end result is ordered properly. For example, consider a collated log that is processed and stored to disk. The log events come from multiple publishers. In this case, the actual order in which log events are processed does not matter; all that matters is that the end result can be accessed in a time-sorted manner. Therefore, you could attach a timestamp to every event in the publisher and make the subscriber store the messages in some underlying data store (such as Cloud Datastore) that allows storage or retrieval by the sorted timestamp.

The same option works for state updates that require access to only the most recent state. For example, consider keeping track of current prices for different stocks where one does not care about history, only the most recent value. You could attach a timestamp to each stock tick and only store ones that are more recent than the currently-stored value.
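The "keep only the most recent state" idea above can be sketched in plain Java. This is a minimal illustration, not a Datastore client; the class and method names are made up for the example, and the timestamp comparison is the whole point: an older or duplicate tick never overwrites a newer one.

```java
import java.util.HashMap;
import java.util.Map;

public class LatestTickStore {
    // symbol -> {eventTimestampMillis, priceCents}; only the newest tick per symbol is kept
    private final Map<String, long[]> latest = new HashMap<>();

    /** Stores the tick only if its timestamp is newer than the stored one; returns true if stored. */
    public boolean update(String symbol, long timestampMillis, long priceCents) {
        long[] current = latest.get(symbol);
        if (current != null && current[0] >= timestampMillis) {
            return false; // out-of-order or duplicate tick: ignore it
        }
        latest.put(symbol, new long[] {timestampMillis, priceCents});
        return true;
    }

    /** Most recent price for the symbol, or null if no tick was ever stored. */
    public Long priceCents(String symbol) {
        long[] current = latest.get(symbol);
        return current == null ? null : current[1];
    }
}
```

Because each update is an idempotent "store if newer" comparison, the subscriber ends up with the correct latest value regardless of the order in which Pub/Sub delivers the ticks.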

Timestamp

Streaming data: the timestamp is attached automatically.
Batch data: the timestamp must be added manually.



The timestamp is the basis of all windowing primitives, including watermarks, triggers, and lag monitoring of delayed messages.

By default, the timestamp is set during the publishing process and represents the real time at which the message is published to Pub/Sub, i.e., system time.

There are cases where you want to override system time with your own timestamp. To do so, publish the custom timestamp as a Pub/Sub attribute.

-> Then inform Dataflow of that attribute name via the timestamp label (in the Beam SDK, PubsubIO's withTimestampAttribute).
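The two halves of that handshake can be sketched in plain Java. This is not the Pub/Sub client or Beam API, just the attribute bookkeeping: the attribute name "ts" is an arbitrary choice for the example, and the only requirement is that the publisher and the pipeline reader agree on it.

```java
import java.util.HashMap;
import java.util.Map;

public class EventTime {
    // Hypothetical attribute name; any name works as long as the publisher
    // and the reading pipeline are configured with the same one.
    static final String TS_ATTRIBUTE = "ts";

    /** Publisher side: attach the event time as a Pub/Sub message attribute. */
    static Map<String, String> withEventTime(Map<String, String> attributes, long eventMillis) {
        attributes.put(TS_ATTRIBUTE, Long.toString(eventMillis));
        return attributes;
    }

    /** Subscriber side: prefer the custom attribute, fall back to the publish time. */
    static long eventTime(Map<String, String> attributes, long publishMillis) {
        String ts = attributes.get(TS_ATTRIBUTE);
        return ts != null ? Long.parseLong(ts) : publishMillis;
    }
}
```

In an actual Beam pipeline, the subscriber half is done for you once you tell PubsubIO the attribute name; the fallback to publish time mirrors Pub/Sub's default behavior when no custom timestamp is supplied.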

Windowing

Windowing subdivides a PCollection according to the timestamps of its individual elements.
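For fixed windows, the subdivision is simple arithmetic on the element's timestamp. The sketch below is plain Java (not Beam's FixedWindows class, though it computes the same boundaries): every timestamp maps to the window [start, start + size) that contains it.

```java
public class FixedWindowAssign {
    /** Start of the fixed window (of the given size) containing the timestamp. */
    static long windowStart(long timestampMillis, long windowSizeMillis) {
        // floorMod keeps the arithmetic correct even for negative timestamps
        return timestampMillis - Math.floorMod(timestampMillis, windowSizeMillis);
    }

    /** End (exclusive) of the fixed window containing the timestamp. */
    static long windowEnd(long timestampMillis, long windowSizeMillis) {
        return windowStart(timestampMillis, windowSizeMillis) + windowSizeMillis;
    }
}
```

Two elements land in the same window exactly when they share the same windowStart, which is what lets a grouping transform aggregate them together.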

Watermark

The watermark is learned automatically by the system; it does not need to be configured.

System’s notion of when all data in a certain window can be expected to have arrived in the pipeline. Data that arrives with a timestamp after the watermark is considered late data.

From our example, suppose the first window covers event times 0:00-5:00 and we have a simple watermark that assumes approximately 30s of lag time between the data timestamps (the event time) and the time the data appears in the pipeline (the processing time). Beam would then close the first window at 5:30.
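That 30s-lag example can be written out in plain Java. This is only an illustration of the fixed-lag assumption, not Beam's actual watermark (which each source estimates from the data it has seen); the method names are invented for the sketch.

```java
public class WatermarkExample {
    /**
     * A simple fixed-lag watermark: assuming events arrive in the pipeline
     * at most lagMillis after they occur, then at processing time t every
     * event with a timestamp older than t - lagMillis should already be here.
     */
    static long watermark(long processingTimeMillis, long lagMillis) {
        return processingTimeMillis - lagMillis;
    }

    /** The window [start, end) can be closed once the watermark passes its end. */
    static boolean windowClosed(long windowEndMillis, long processingTimeMillis, long lagMillis) {
        return watermark(processingTimeMillis, lagMillis) >= windowEndMillis;
    }

    /** Data whose timestamp is behind the watermark is late data. */
    static boolean isLate(long eventTimeMillis, long processingTimeMillis, long lagMillis) {
        return eventTimeMillis < watermark(processingTimeMillis, lagMillis);
    }
}
```

With a window ending at 5:00 (300,000 ms) and a 30s lag, the watermark reaches the window's end exactly at processing time 5:30, matching the example above.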

Beam’s default windowing configuration tries to determine when all data has arrived (based on the type of data source) and then advances the watermark past the end of the window. This default configuration does not allow late data.

You can allow late data by invoking the .withAllowedLateness operation when you set your PCollection’s windowing strategy.

import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.apache.beam.sdk.values.PCollection;
import org.joda.time.Duration;

PCollection<String> items = ...;
// 1-minute fixed windows that accept data arriving up to 2 days late
PCollection<String> fixedWindowedItems = items.apply(
    Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
          .withAllowedLateness(Duration.standardDays(2)));

Triggers

Triggers determine when to emit the aggregated results of each window, and how to handle late data.

Batch data: add timestamps manually first. Then:

1. Set the window.
2. The watermark (learned by the system itself) defines what counts as late data.
3. Allow late data by invoking the .withAllowedLateness operation.
4. Set triggers to allow processing of late data, firing after the event-time watermark passes the end of the window.
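The interaction of steps 2-4 can be simulated in plain Java. This is not the Beam trigger API (there you would combine Window.into, withAllowedLateness, and a trigger such as AfterWatermark.pastEndOfWindow with late firings); it only classifies an element against a window once the watermark is known.

```java
public class LatePolicy {
    enum Verdict { ON_TIME, LATE_ALLOWED, DROPPED }

    /**
     * Classifies an element arriving for the window ending at windowEndMillis:
     * on time while the watermark has not passed the window's end,
     * late but still processed while within the allowed lateness,
     * and dropped once even the allowed lateness has expired.
     */
    static Verdict classify(long windowEndMillis, long watermarkMillis, long allowedLatenessMillis) {
        if (watermarkMillis < windowEndMillis) return Verdict.ON_TIME;
        if (watermarkMillis < windowEndMillis + allowedLatenessMillis) return Verdict.LATE_ALLOWED;
        return Verdict.DROPPED;
    }
}
```

The LATE_ALLOWED band is exactly what a late-firing trigger acts on: each late element that lands there can re-fire the window's aggregation.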

Pub/Sub vs. Kafka

http://www.jesse-anderson.com/2016/07/apache-kafka-and-google-cloud-pubsub/

Cloud vs DIY
Pub/Sub is cloud-only vs. Kafka can run on-premises or in the cloud

Operations
-> Pub/Sub stores messages for seven days and canNOT be configured to store longer vs. Kafka can store as much data as you want (e.g. 4-21 days)
-> Pub/Sub is automatically replicated across several regions and zones vs. Kafka requires you to configure replication yourself
-> Pub/Sub has an SLA for uptime vs. with Kafka, uptime is your own purview
-> Pub/Sub has built-in authentication via Google Cloud’s IAM vs. Kafka has support for authentication via Kerberos
-> Pub/Sub encrypts data on the wire and at rest vs. with Kafka, encryption at rest is the responsibility of the user

Price
Pub/Sub is priced per million messages plus storage. Publishing and consuming 10 million messages would cost $16 (at the time of the linked 2016 comparison).

Architectural differences
-> Kafka gives you options around delivery guarantees vs. Pub/Sub guarantees at-least-once delivery and you can’t change that programmatically
-> Both products feature massive scalability
-> Kafka does not guarantee performance (it depends on configuration and partitioning) vs. Pub/Sub provides guaranteed performance
-> Kafka guarantees ordering within a partition vs. Pub/Sub does not have ordering guarantees

How does Kafka guarantee order?

https://medium.com/@felipedutratine/kafka-ordering-guarantees-99320db8f87f

-> Use only one partition. Kafka preserves the order of messages within a partition.
-> Set the producer config max.in.flight.requests.per.connection=1 to make sure that while a batch of messages is retrying, additional messages are not sent (otherwise a retry could reorder messages)
-> If multiple partitions: put all the messages with the same key on one partition.
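The "same key, same partition" rule from the last bullet can be sketched in plain Java. Kafka's actual default partitioner uses murmur2 over the serialized key bytes; this sketch substitutes String.hashCode to show the idea, and the class name is invented for the example.

```java
public class KeyedPartitioner {
    /**
     * Same key always maps to the same partition, so the per-key message
     * order is preserved even when the topic has many partitions.
     */
    static int partitionFor(String key, int numPartitions) {
        // floorMod guards against negative hash codes
        return Math.floorMod(key.hashCode(), numPartitions);
    }
}
```

This is why choosing the key is the real ordering decision in Kafka: everything that must stay in order (e.g. all events for one order ID) should share a key.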

