Official HDFS design documentation: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html
Introduction
The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. It has many similarities with existing distributed file systems. However, the differences from other distributed file systems are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. HDFS provides high throughput access to application data and is suitable for applications that have large data sets. HDFS relaxes a few POSIX requirements to enable streaming access to file system data. HDFS was originally built as infrastructure for the Apache Nutch web search engine project.
Assumptions and Goals
Hardware Failure
Hardware failure is the norm rather than the exception. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system’s data. The fact that there are a huge number of components and that each component has a non-trivial probability of failure means that some component of HDFS is always non-functional. Therefore, detection of faults and quick, automatic recovery from them is a core architectural goal of HDFS.
Streaming Data Access
Applications that run on HDFS need streaming access to their data sets. They are not general purpose applications that typically run on general purpose file systems. HDFS is designed more for batch processing rather than interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access.
Large Data Sets
Applications that run on HDFS have large data sets. A typical file in HDFS is gigabytes to terabytes in size. Thus, HDFS is tuned to support large files. It should provide high aggregate data bandwidth and scale to hundreds of nodes in a single cluster. It should support tens of millions of files in a single instance.
Simple Coherency Model
HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed except for appends and truncates. Appending content to the end of a file is supported, but a file cannot be updated at an arbitrary offset. This assumption simplifies data coherency issues and enables high throughput data access. A MapReduce application or a web crawler application fits perfectly with this model.
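As a small illustration of the write-once model, appends go through a dedicated shell command, while in-place edits have no equivalent. A minimal sketch (the file names below are hypothetical):
hadoop fs -appendToFile notes.txt /user/hadoop/log.txt   # append a local file to the end of an HDFS file
# there is no command to overwrite bytes at an arbitrary offset;
# to change existing content, the whole file must be rewritten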
“Moving Computation is Cheaper than Moving Data”
A computation requested by an application is much more efficient if it is executed near the data it operates on. This is especially true when the size of the data set is huge. This minimizes network congestion and increases the overall throughput of the system. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running. HDFS provides interfaces for applications to move themselves closer to where the data is located.
Portability Across Heterogeneous Hardware and Software Platforms
HDFS has been designed to be easily portable from one platform to another. This facilitates widespread adoption of HDFS as a platform of choice for a large set of applications.
NameNode and DataNodes
HDFS has a master/slave architecture. An HDFS cluster consists of a single NameNode, a master server that manages the file system namespace and regulates access to files by clients. In addition, there are a number of DataNodes, usually one per node in the cluster, which manage storage attached to the nodes that they run on. HDFS exposes a file system namespace and allows user data to be stored in files. Internally, a file is split into one or more blocks and these blocks are stored in a set of DataNodes. The NameNode executes file system namespace operations like opening, closing, and renaming files and directories. It also determines the mapping of blocks to DataNodes. The DataNodes are responsible for serving read and write requests from the file system’s clients. The DataNodes also perform block creation, deletion, and replication upon instruction from the NameNode.
NameNode:
(1 Responds to client requests (performs file system namespace operations such as opening, closing, and renaming files and directories)
(2 Manages metadata (file names, replication factors, and the mapping of blocks to DataNodes)
DataNode:
(1 Stores the data blocks (Blocks) that make up user files
(2 Periodically sends heartbeats to the NameNode, reporting its own health and the status of all of its blocks
(3 Creates, deletes, and replicates blocks as instructed by the NameNode
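To see these roles in practice, the following commands ask a running NameNode for its view of the DataNodes and of one file's block-to-DataNode mapping (a sketch against a live cluster; the file path is hypothetical):
hdfs dfsadmin -report                                     # lists DataNodes with capacity and last-heartbeat info
hdfs fsck /user/hadoop/log.txt -files -blocks -locations  # shows the file's blocks and which DataNodes hold them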

The NameNode and DataNode are pieces of software designed to run on commodity machines. These machines typically run a GNU/Linux operating system (OS). HDFS is built using the Java language; any machine that supports Java can run the NameNode or the DataNode software. Usage of the highly portable Java language means that HDFS can be deployed on a wide range of machines. A typical deployment has a dedicated machine that runs only the NameNode software. Each of the other machines in the cluster runs one instance of the DataNode software. The architecture does not preclude running multiple DataNodes on the same machine but in a real deployment that is rarely the case.
The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.
Deployment
(1 One NameNode + N DataNodes; it is recommended to deploy the NameNode and DataNodes on separate nodes
(2 The HDFS architecture allows, but does not recommend, running multiple DataNodes on the same machine
The File System Namespace
HDFS supports a traditional hierarchical file organization. A user or an application can create directories and store files inside these directories. The file system namespace hierarchy is similar to most other existing file systems; one can create and remove files, move a file from one directory to another, or rename a file. HDFS supports user quotas and access permissions. HDFS does not support hard links or soft links. However, the HDFS architecture does not preclude implementing these features.
The NameNode maintains the file system namespace. Any change to the file system namespace or its properties is recorded by the NameNode. An application can specify the number of replicas of a file that should be maintained by HDFS. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.
File system namespace
(1 Hierarchical directory structure, with support for user quotas and access permissions
(2 The NameNode records every change to the file system namespace and its properties, and maintains the replication factor of each file
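A sketch of typical namespace operations through the shell (the directory names are hypothetical, and the quota commands require administrator privileges):
hadoop fs -mkdir -p /user/hadoop/input                 # create a directory tree
hadoop fs -mv /user/hadoop/input /user/hadoop/data     # move/rename within the namespace
hadoop fs -chmod 750 /user/hadoop/data                 # set access permissions
hdfs dfsadmin -setQuota 10000 /user/hadoop             # limit the number of names under a directory
hdfs dfsadmin -setSpaceQuota 100g /user/hadoop         # limit the space consumed under a directory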
Data Replication
HDFS is designed to reliably store very large files across machines in a large cluster. It stores each file as a sequence of blocks. The blocks of a file are replicated for fault tolerance. The block size and replication factor are configurable per file.
All blocks in a file except the last block are the same size; since support for variable-length blocks was added to append and hsync, users can start a new block without filling the last block to the configured block size.
An application can specify the number of replicas of a file. The replication factor can be specified at file creation time and can be changed later. Files in HDFS are write-once (except for appends and truncates) and have strictly one writer at any time.
The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.
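For example, the replication factor can be set when a file is written and changed afterwards (a sketch; the file names are hypothetical):
hadoop fs -D dfs.replication=2 -put data.txt /user/hadoop/   # create the file with replication factor 2
hadoop fs -setrep -w 3 /user/hadoop/data.txt                 # raise it to 3 later; -w waits until re-replication completes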

Pseudo-distributed single-node cluster
(1 Required software
Java™ must be installed. Recommended Java versions are described at HadoopJavaVersions // install the JDK
ssh must be installed and sshd must be running to use the Hadoop scripts that manage remote Hadoop daemons. // install ssh and set up passwordless login:
ssh-keygen -t rsa
cp ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
(2 Download
Address: http://archive-primary.cloudera.com/cdh5/cdh/5/ hadoop-2.6.0-cdh5.7.0.tar.gz
(3 Modify the Hadoop configuration files
etc/hadoop/hadoop-env.sh:
# set to the root of your Java installation
export JAVA_HOME=/usr/java/latest
etc/hadoop/core-site.xml:
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
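The official single-node setup guide also sets the default replication factor to 1 in etc/hadoop/hdfs-site.xml, since a pseudo-distributed cluster has only one DataNode:
etc/hadoop/hdfs-site.xml:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>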
(4 Start HDFS
Format the file system (only needed the first time): bin/hdfs namenode -format
Start: ./sbin/start-dfs.sh
Verify with the jps command, which should show: SecondaryNameNode NameNode DataNode
(5 HDFS shell operations
hadoop fs -<command>
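A few common examples (the local and HDFS file names are hypothetical):
hadoop fs -ls /                                  # list the root directory
hadoop fs -mkdir -p /user/hadoop                 # create a directory
hadoop fs -put local.txt /user/hadoop            # upload a local file
hadoop fs -cat /user/hadoop/local.txt            # print a file's contents
hadoop fs -get /user/hadoop/local.txt ./copy.txt # download a file
hadoop fs -rm /user/hadoop/local.txt             # delete a file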