https://cloud.google.com/pubsub/docs/ordering
Order in the final result matters
Typical Use Cases: Logs, state updates
In use cases in this category, the order in which messages are processed does not matter; all that matters is that the end result is ordered properly. For example, consider a collated log that is processed and stored to disk. The log events come from multiple publishers. In this case, the actual order in which log events are processed does not matter; all that matters is that the end result can be accessed in a time-sorted manner. Therefore, you could attach a timestamp to every event in the publisher and make the subscriber store the messages in some underlying data store (such as Cloud Datastore) that allows storage or retrieval by the sorted timestamp.
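As a concrete illustration of the publisher side, here is a minimal sketch using the Google Cloud Pub/Sub Java client (the project, topic, and attribute names are placeholders, not from these notes):

import com.google.cloud.pubsub.v1.Publisher;
import com.google.protobuf.ByteString;
import com.google.pubsub.v1.PubsubMessage;
import com.google.pubsub.v1.TopicName;

// "my-project" and "log-events" are placeholder names.
Publisher publisher =
    Publisher.newBuilder(TopicName.of("my-project", "log-events")).build();

// Attach the event timestamp as a message attribute so the subscriber
// can store and retrieve log events in time-sorted order.
PubsubMessage message = PubsubMessage.newBuilder()
    .setData(ByteString.copyFromUtf8("user signed in"))
    .putAttributes("ts", Long.toString(System.currentTimeMillis()))
    .build();

publisher.publish(message).get();  // block until the publish succeeds
publisher.shutdown();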
The same option works for state updates that require access to only the most recent state. For example, consider keeping track of current prices for different stocks where you do not care about history, only the most recent value. You could attach a timestamp to each stock tick and only store ticks that are more recent than the currently stored value.
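A minimal in-memory sketch of that "keep only the newest value" logic (a stand-in for the real data store; the class and names are illustrative):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class LatestPriceStore {
    // A stock tick carrying the publisher-attached timestamp.
    public record Tick(long timestampMillis, double price) {}

    private final Map<String, Tick> latest = new ConcurrentHashMap<>();

    // Store the incoming tick only if it is newer than the one already held,
    // so out-of-order arrivals cannot overwrite a more recent price.
    public void update(String symbol, Tick incoming) {
        latest.merge(symbol, incoming, (current, candidate) ->
            candidate.timestampMillis() > current.timestampMillis() ? candidate : current);
    }

    public Tick get(String symbol) {
        return latest.get(symbol);
    }
}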
Timestamp
Streaming data: a timestamp is attached automatically.
Batch data: the timestamp must be added manually.
The timestamp is the basis of all windowing primitives, including watermarks, triggers, and lag monitoring of delayed messages.
By default, the timestamp is set during the publishing process and represents the real time at which the message is published to Pub/Sub, i.e. a system time.
There are cases where you want to override this system time with your own timestamp; you can do so by publishing a custom timestamp as a Pub/Sub attribute.
-> You then inform Dataflow of that attribute using the timestamp label setting, as sketched below.
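In Beam's Java SDK this is done by passing the attribute name to PubsubIO when the pipeline reads the subscription (the attribute name "ts" and the resource names here are assumptions):

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;

Pipeline pipeline = Pipeline.create(PipelineOptionsFactory.create());

// Use the "ts" attribute (set by the publisher) as each element's event
// timestamp instead of the Pub/Sub publish time.
PCollection<String> events = pipeline.apply(
    PubsubIO.readStrings()
        .fromSubscription("projects/my-project/subscriptions/my-sub")
        .withTimestampAttribute("ts"));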
Windowing
Windowing subdivides a PCollection according to the timestamps of its individual elements.
Watermark
The watermark is learned automatically by the system; it does not need to be configured.
System’s notion of when all data in a certain window can be expected to have arrived in the pipeline. Data that arrives with a timestamp after the watermark is considered late data.
Continuing the example: suppose a simple watermark assumes approximately 30 seconds of lag between the data timestamps (the event time) and the time the data appears in the pipeline (the processing time); Beam would then close the first window at 5:30.
Beam’s default windowing configuration tries to determine when all data has arrived (based on the type of data source) and then advances the watermark past the end of the window. This default configuration does not allow late data.
You can allow late data by invoking the .withAllowedLateness operation when you set your PCollection’s windowing strategy.
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.joda.time.Duration;
// Fixed one-minute windows that also accept data arriving up to two days late.
PCollection<String> items = ...;
PCollection<String> fixedWindowedItems = items.apply(
    Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
        .withAllowedLateness(Duration.standardDays(2)));
Triggers
To determine when to emit the aggregated results of each window
To handle late data
For batch data, the overall workflow is:
1 Add timestamps to the elements.
2 Set the window.
3 The watermark, which the system learns automatically, defines what counts as late data.
4 Allow late data by invoking the .withAllowedLateness operation.
5 Set triggers to allow processing of late data, e.g. by triggering after the event-time watermark passes the end of the window (see the sketch below).
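A minimal sketch of steps 4-5 in Beam's Java SDK, extending the windowing snippet above (the specific trigger and durations are illustrative choices, not prescribed by these notes):

import org.apache.beam.sdk.transforms.windowing.AfterPane;
import org.apache.beam.sdk.transforms.windowing.AfterWatermark;
import org.apache.beam.sdk.transforms.windowing.FixedWindows;
import org.apache.beam.sdk.transforms.windowing.Window;
import org.joda.time.Duration;

// Emit a result when the watermark passes the end of the window, then
// re-emit once per late element for up to two days of allowed lateness.
items.apply(Window.<String>into(FixedWindows.of(Duration.standardMinutes(1)))
    .triggering(AfterWatermark.pastEndOfWindow()
        .withLateFirings(AfterPane.elementCountAtLeast(1)))
    .withAllowedLateness(Duration.standardDays(2))
    .accumulatingFiredPanes());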
Pub/Sub vs. Kafka
http://www.jesse-anderson.com/2016/07/apache-kafka-and-google-cloud-pubsub/
Cloud vs DIY
Pub/Sub is cloud-only, whereas Kafka can run on-premises or in the cloud.
Operations
-> Pub/Sub stores messages for seven days and cannot be configured to store them longer, whereas Kafka can store as much data as you want (e.g. 4-21 days).
-> Pub/Sub is automatically replicated across several regions and zones, whereas Kafka requires you to set up replication yourself.
-> Pub/Sub has an uptime SLA, whereas Kafka's uptime is your own purview.
-> Pub/Sub has built-in authentication via Google Cloud's IAM, whereas Kafka supports authentication via Kerberos.
-> Pub/Sub encrypts data on the wire and at rest, whereas encryption at rest in Kafka is the responsibility of the user.
Price
Pub/Sub is priced per million messages and for storage. Publishing and consuming 10 million messages would cost $16.
Architectural differences
-> Kafka gives you options around delivery guarantees, whereas Pub/Sub guarantees at-least-once delivery and you can't change that programmatically.
-> Both products feature massive scalability
-> Kafka does not guarantee performance (it depends on configuration and partitioning), whereas Pub/Sub provides guaranteed performance.
-> Kafka guarantees ordering in a partition vs. Pub/Sub does not have ordering guarantees.
How does Kafka guarantee order?
https://medium.com/@felipedutratine/kafka-ordering-guarantees-99320db8f87f
-> Use only one partition. Kafka preserves the order of messages within a partition.
-> Set the config max.in.flight.requests.per.connection=1 to make sure that while a batch of messages is retrying, additional messages will not be sent (and thus possibly reordered).
-> If you use multiple partitions: put all messages with the same key on one partition, since ordering is only preserved within a partition (see the producer sketch below).
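Putting those settings together, a minimal Kafka producer sketch in Java (the broker address, topic, keys, and values are placeholders):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
// Prevent retries from reordering in-flight batches.
props.put("max.in.flight.requests.per.connection", "1");
props.put("retries", "3");

try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
    // Records with the same key ("GOOG") hash to the same partition,
    // so these two ticks are delivered in the order they were sent.
    producer.send(new ProducerRecord<>("stock-ticks", "GOOG", "price=101.50"));
    producer.send(new ProducerRecord<>("stock-ticks", "GOOG", "price=101.70"));
}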