
BigData / Hadoop basics

Explain the HDFS file read workflow.

The client opens the file it wishes to read by calling open() on the FileSystem object, which for HDFS is an instance of DistributedFileSystem.

DistributedFileSystem makes an RPC call to the NameNode to determine the locations of the first few blocks in the file.

For each block, the NameNode returns the addresses of the DataNodes that hold a copy of that block, sorted by their proximity to the client according to the cluster's network topology.

DistributedFileSystem returns an FSDataInputStream to the client for it to read data from. FSDataInputStream in turn wraps a DFSInputStream, which manages the DataNode and NameNode I/O.

The client calls read() on the stream. DFSInputStream, which has stored the DataNode addresses, connects to the closest DataNode holding the first block in the file.

Data is streamed from the DataNode back to the client, which calls read() repeatedly on the stream.

When the end of a block is reached, DFSInputStream closes the connection to that DataNode and then finds the best DataNode for the next block. This happens transparently, so from the client's point of view it is simply reading a continuous stream.
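The steps above can be sketched as a small simulation. This is a hypothetical illustration, not the real Hadoop API: the class names (SimNameNode, SimDataNode, SimDFSInputStream) and all block IDs are invented for this sketch; the actual Hadoop classes are DistributedFileSystem, DFSInputStream, and FSDataInputStream.

```python
# Hypothetical sketch of the HDFS read workflow; names are invented,
# not part of the real Hadoop client API.

class SimDataNode:
    def __init__(self, name, distance):
        self.name = name          # e.g. "dn-near"
        self.distance = distance  # network distance from the client
        self.blocks = {}          # block id -> block data

    def read_block(self, block_id):
        return self.blocks[block_id]


class SimNameNode:
    """Holds only metadata: which DataNodes store each block (steps 2-3)."""
    def __init__(self):
        self.block_locations = {}  # block id -> list of SimDataNode replicas

    def get_block_locations(self, block_id):
        # Replicas are returned sorted by proximity to the client, closest first.
        return sorted(self.block_locations[block_id], key=lambda d: d.distance)


class SimDFSInputStream:
    """Reads each block from the closest replica, one DataNode at a time
    (steps 5-7): connect, stream the block, disconnect, move to the next."""
    def __init__(self, namenode, block_ids):
        self.namenode = namenode
        self.block_ids = block_ids

    def read_all(self):
        data = b""
        for block_id in self.block_ids:
            closest = self.namenode.get_block_locations(block_id)[0]
            data += closest.read_block(block_id)  # stream block, then "close"
        return data


# Usage: a two-block file, with block 0 replicated on two DataNodes
# at different network distances from the client.
near, far = SimDataNode("dn-near", 1), SimDataNode("dn-far", 3)
near.blocks["f-blk0"] = b"hello "
far.blocks["f-blk0"] = b"hello "
near.blocks["f-blk1"] = b"hdfs"
far.blocks["f-blk1"] = b"hdfs"

nn = SimNameNode()
nn.block_locations = {"f-blk0": [far, near], "f-blk1": [near, far]}

stream = SimDFSInputStream(nn, ["f-blk0", "f-blk1"])
print(stream.read_all())  # b'hello hdfs'
```

Note how the client only ever talks to the NameNode for metadata; the actual block bytes always come directly from a DataNode, which is what lets HDFS scale read throughput with the number of DataNodes.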

