
How is HDFS different from traditional file systems?

1 Answer

HDFS, the Hadoop Distributed File System, is responsible for storing huge data sets across the nodes of a cluster. It is a distributed file system designed to run on commodity hardware. While it has many similarities with existing distributed file systems, the differences are significant: HDFS is highly fault-tolerant, is designed to be deployed on low-cost hardware, and provides high-throughput access to application data rather than the low-latency, random access typical of traditional file systems.

HDFS is designed to support very large files. Applications that suit it write their data once, then read it one or more times and require those reads to be served at streaming speeds. Accordingly, HDFS enforces write-once-read-many semantics on files, whereas traditional file systems allow arbitrary in-place updates.
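The write-once-read-many model is visible from the standard `hdfs dfs` shell. A minimal sketch, assuming a running Hadoop cluster; the paths and file names are illustrative only:

```shell
# Copy a local file into HDFS (the /user/demo path is hypothetical).
hdfs dfs -mkdir -p /user/demo
hdfs dfs -put access.log /user/demo/access.log

# Read it back -- large sequential (streaming) reads are the common pattern.
hdfs dfs -cat /user/demo/access.log | head

# Appending is the only supported mutation; there is no in-place edit.
hdfs dfs -appendToFile more.log /user/demo/access.log

# To "modify" a file, you delete it and write a replacement wholesale.
hdfs dfs -rm /user/demo/access.log
```

Note there is no equivalent of seeking into an existing file and overwriting bytes, which a traditional POSIX file system permits.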
