HDFS

From "HDFS: The Hadoop Distributed File System": HDFS is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to user applications.

Assumptions

  1. Hardware failure is the norm rather than the exception.

Goals

  1. HDFS is designed more for batch processing rather than interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access.
  2. HDFS is tuned to support large files.
  3. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running.

NameNode and DataNode

  1. The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.
  2. The NameNode maintains the file system namespace. Any change to the file system namespace or its properties is recorded by the NameNode.
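The division of labor above (metadata on the NameNode, data on DataNodes, and user data never flowing through the NameNode) can be illustrated with a toy sketch. The class and method names here are invented for illustration; this is not the real HDFS API:

```python
# Toy sketch: the NameNode holds only metadata; a client asks it *where*
# the blocks live, then reads the bytes directly from the DataNodes.

class DataNode:
    def __init__(self):
        self.blocks = {}                     # block_id -> bytes

    def read(self, block_id):
        return self.blocks[block_id]

class NameNode:
    def __init__(self):
        self.namespace = {}                  # path -> list of (block_id, datanode)

    def add_file(self, path, placements):
        self.namespace[path] = placements    # a namespace change, recorded here

    def get_block_locations(self, path):
        return self.namespace[path]          # metadata only -- never the data

dn = DataNode()
dn.blocks["blk_1"] = b"hello"
nn = NameNode()
nn.add_file("/demo.txt", [("blk_1", dn)])

# Client: metadata from the NameNode, data from the DataNode.
data = b"".join(node.read(bid) for bid, node in nn.get_block_locations("/demo.txt"))
```

Note that the `data` bytes pass only between the client and the DataNode objects; the NameNode never touches them.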

Data Replication

  1. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.
  2. The block size and replication factor are configurable per file.
  3. All blocks in a file except the last block are the same size.
  4. An application can specify the number of replicas of a file.
  5. The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.
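Item 5 above is the mechanism by which the NameNode notices missing replicas: by tallying the Blockreports, it can flag any block whose replica count has fallen below the file's replication factor. A minimal sketch, with invented names and data shapes:

```python
# Sketch: count replicas across DataNode block reports and flag blocks
# that are under-replicated relative to the target replication factor.

def under_replicated(block_reports, replication_factor):
    """block_reports: dict of datanode name -> set of block ids it holds."""
    counts = {}
    for blocks in block_reports.values():
        for b in blocks:
            counts[b] = counts.get(b, 0) + 1
    return {b for b, n in counts.items() if n < replication_factor}

reports = {
    "dn1": {"blk_1", "blk_2"},
    "dn2": {"blk_1"},
    "dn3": {"blk_1", "blk_2"},
}
# blk_2 has only 2 replicas, below the target factor of 3.
missing = under_replicated(reports, replication_factor=3)
```

In the real system the NameNode would then schedule re-replication of the flagged blocks onto healthy DataNodes.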

Racks

  1. Large HDFS instances run on a cluster of computers that commonly spread across many racks. Communication between two nodes in different racks has to go through switches.
  2. For the common case, when the replication factor is three, HDFS’s placement policy is to put one replica on one node in the local rack, another on a different node in the local rack, and the last on a different node in a different rack.
  3. If there exists a replica on the same rack as the reader node, then that replica is preferred to satisfy the read request. If the HDFS cluster spans multiple data centers, then a replica that is resident in the local data center is preferred over any remote replica.

Safemode

  1. On startup, the NameNode enters a special state called Safemode. Replication of data blocks does not occur when the NameNode is in the Safemode state.
  2. A block is considered safely replicated when the minimum number of replicas of that data block has checked in with the NameNode.
  3. The NameNode keeps an image of the entire file system namespace and file Blockmap in memory.
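The three-replica placement policy described under Racks (one replica on the writer's node, one on a different node in the same rack, one on a node in a different rack) can be sketched as follows. The topology representation and names are made up for illustration:

```python
# Sketch of rack-aware placement for replication factor 3, as described
# in the notes: local node, another node in the local rack, then a node
# in a different rack.

def place_replicas(writer, topology):
    """writer: (rack, node); topology: dict of rack -> list of nodes."""
    rack, node = writer
    same_rack = [n for n in topology[rack] if n != node]          # local rack, other nodes
    other_racks = [n for r, nodes in topology.items() if r != rack
                   for n in nodes]                                # everything off-rack
    return [node, same_rack[0], other_racks[0]]

topology = {"rack1": ["n1", "n2"], "rack2": ["n3", "n4"]}
replicas = place_replicas(("rack1", "n1"), topology)
```

Keeping two replicas in the writer's rack limits cross-rack (cross-switch) traffic on write, while the third replica in a different rack protects against the loss of an entire rack.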

The Persistence of File System Metadata

  1. The NameNode uses a transaction log called the EditLog to persistently record every change that occurs to file system metadata.
  2. The entire file system namespace, including the mapping of blocks to files and file system properties, is stored in a file called the FsImage. The FsImage is stored as a file in the NameNode’s local file system too.
  3. When the NameNode starts up, it reads the FsImage and EditLog from disk, applies all the transactions from the EditLog to the in-memory representation of the FsImage, and flushes out this new version into a new FsImage on disk. It can then truncate the old EditLog because its transactions have been applied to the persistent FsImage. This process is called a checkpoint.
  4. It is not optimal to create all local files in the same directory because the local file system might not be able to efficiently support a huge number of files in a single directory.
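The checkpoint described in item 3 can be sketched as a replay of logged transactions over a namespace snapshot. The record formats here are invented; real FsImage and EditLog records are binary and far richer:

```python
# Minimal checkpoint sketch: the FsImage is a snapshot of the namespace,
# the EditLog is the list of changes since that snapshot. A checkpoint
# replays the log into the image and truncates the log.

def checkpoint(fsimage, editlog):
    image = dict(fsimage)                    # in-memory copy of the snapshot
    for op, path, value in editlog:          # apply every logged transaction
        if op == "create":
            image[path] = value
        elif op == "delete":
            image.pop(path, None)
    return image, []                         # new FsImage, emptied EditLog

fsimage = {"/a": "blocks-of-a"}
editlog = [("create", "/b", "blocks-of-b"), ("delete", "/a", None)]
new_image, new_log = checkpoint(fsimage, editlog)
```

After the replay, the new FsImage alone reproduces the namespace state, which is why the old EditLog can be safely truncated.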

The Communication Protocols

  1. All HDFS communication protocols are layered on top of the TCP/IP protocol.
  2. By design, the NameNode never initiates any RPCs. Instead, it only responds to RPC requests issued by DataNodes or clients.

Robustness

  1. The three common types of failures are NameNode failures, DataNode failures and network partitions.
  2. The time-out to mark DataNodes dead is conservatively long (over 10 minutes by default) in order to avoid a replication storm caused by state flapping of DataNodes.
  3. A scheme might automatically move data from one DataNode to another if the free space on a DataNode falls below a certain threshold.
  4. When a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace. When a client retrieves file contents it verifies that the data it received from each DataNode matches the checksum stored in the associated checksum file. If not, then the client can opt to retrieve that block from another DataNode that has a replica of that block.
  5. Maintaining redundant copies of the FsImage and EditLog may degrade the rate of namespace transactions the NameNode can support, but this degradation is acceptable because even though HDFS applications are very data intensive in nature, they are not metadata intensive. When a NameNode restarts, it selects the latest consistent FsImage and EditLog to use.
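The checksum scheme in item 4 can be sketched with CRC32 as a stand-in for HDFS's per-block checksums (the function names here are illustrative, not the HDFS client API):

```python
# Sketch of client-side checksum verification: compute a checksum on
# write, recompute and compare on read.

import zlib

def store(block):
    # On write: keep a checksum alongside the block data.
    return block, zlib.crc32(block)

def verify(block, checksum):
    # On read: a mismatch means this replica is corrupt, and the client
    # can fetch the block from another DataNode instead.
    return zlib.crc32(block) == checksum

data, cksum = store(b"block contents")
ok = verify(data, cksum)                      # intact replica
corrupt = verify(b"block c0ntents", cksum)    # flipped byte is detected
```

Because the checksums live in a separate hidden file in the same namespace, corruption of a replica on disk or in transit is caught by the reader rather than silently returned.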

Data Organization

  1. HDFS supports write-once-read-many semantics on files. A typical block size used by HDFS is 128 MB. Thus, an HDFS file is chopped up into 128 MB chunks, and if possible, each chunk will reside on a different DataNode.
  2. When a client writes a block, the first DataNode forwards the data it receives to the second DataNode in the replica set, which forwards it to the third. Thus, the data is pipelined from one DataNode to the next.
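The mapping from a file to fixed-size blocks can be sketched directly; the block size here is scaled down from 128 MB so the example is easy to follow:

```python
# Sketch: an HDFS file is chopped into fixed-size blocks; every block
# except possibly the last has the full block size.

def split_into_blocks(data, block_size):
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

blocks = split_into_blocks(b"x" * 10, block_size=4)
# Three blocks of 4, 4, and 2 bytes -- only the last block is short.
```

In HDFS each of these chunks would then be placed on a different DataNode where possible, so that reads of a large file can be spread across many machines.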