Why is HDFS fault-tolerant?

1 Answer



HDFS is fault-tolerant because it replicates data across multiple DataNodes. By default, every block is stored on three different DataNodes (and, with rack awareness, on at least two racks). If one node crashes, the data can still be read from another replica, and the NameNode schedules re-replication of the lost blocks so the replication factor is restored.
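As a rough illustration, here is a minimal Java sketch using the standard Hadoop FileSystem API to inspect and change the replication factor of a file. It assumes the Hadoop client libraries and cluster configuration files are on the classpath; the path /data/example.txt is purely hypothetical.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationCheck {
    public static void main(String[] args) throws Exception {
        // Loads core-site.xml / hdfs-site.xml found on the classpath
        Configuration conf = new Configuration();

        // dfs.replication is the default replication factor (3 unless overridden)
        System.out.println("Default replication: " + conf.getInt("dfs.replication", 3));

        FileSystem fs = FileSystem.get(conf);

        // Hypothetical path; replace with a file that exists on your cluster
        Path file = new Path("/data/example.txt");

        // How many replicas does this file currently have?
        short current = fs.getFileStatus(file).getReplication();
        System.out.println(file + " is replicated " + current + " times");

        // Raise the replication factor for this one file so it can
        // survive more simultaneous DataNode failures
        fs.setReplication(file, (short) 4);
    }
}
```

The same thing can be done from the command line with `hdfs dfs -setrep`; the API call is shown here only to make the replication mechanism concrete.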


