Describes the thinking behind MapR's architecture. MapR's Hadoop achieves better reliability on commodity hardware than competing systems, including custom, proprietary hardware from other vendors. Apache HDFS and Cassandra replication are also discussed, as are SAN and NAS storage systems such as NetApp and EMC.
Architectural Overview of MapR's Apache Hadoop Distribution
2. 2
100% Apache Hadoop
With significant enterprise-grade enhancements
Comprehensive
management
Industry-standard
interfaces
Higher performance
MapR Distribution for Apache Hadoop
3. 3
MapR: Lights Out Data Center Ready
Reliable Compute
• Automated stateful failover
• Automated re-replication
• Self-healing from HW and SW failures
• Load balancing
• Rolling upgrades
• No lost jobs or data
• 99.999% uptime
Dependable Storage
• Business continuity with snapshots and mirrors
• Recover to a point in time
• End-to-end checksumming
• Strong consistency
• Built-in compression
• Mirror between two sites by RTO policy
4. 4
MapR does MapReduce (fast)
TeraSort Record
1 TB in 54 seconds
1003 nodes
MinuteSort Record
1.5 TB in 59 seconds
2103 nodes
6. 6
The Cloud Leaders Pick MapR
Google chose MapR to provide Hadoop on Google Compute Engine
Amazon EMR is the largest Hadoop provider in revenue and number of clusters
Deploying OpenStack? MapR partners with Canonical and Mirantis on OpenStack support.
9. 9
1. Make the storage reliable
– Recover from disk and node failures
2. Make services reliable
– Services need to checkpoint their state rapidly
– Restart failed service, possibly on another node
– Move check-pointed state to restarted service, using (1) above
3. Do it fast
– Instant-on … (1) and (2) must happen very, very fast
– Without maintenance windows
• No compactions (e.g., Cassandra, Apache HBase)
• No “anti-entropy” that periodically wipes out the cluster (e.g., Cassandra)
How to make a cluster reliable?
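To make step (2) concrete, here is a minimal sketch of a service that checkpoints its state to replicated cluster storage and recovers it on restart, possibly on a different node. The checkpoint path and JSON format are illustrative assumptions, not MapR specifics.

    import json, os, tempfile

    # Hypothetical checkpoint location on replicated cluster storage (assumption).
    CHECKPOINT = "/shared/cluster-storage/myservice/checkpoint.json"

    def save_checkpoint(state: dict) -> None:
        # Write to a temp file, fsync, then rename, so a crash never leaves a torn checkpoint.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(CHECKPOINT))
        with os.fdopen(fd, "w") as f:
            json.dump(state, f)
            f.flush()
            os.fsync(f.fileno())
        os.rename(tmp, CHECKPOINT)

    def restore_checkpoint() -> dict:
        # On restart the service reloads the last durable state; step (1) made the file survive.
        if not os.path.exists(CHECKPOINT):
            return {}
        with open(CHECKPOINT) as f:
            return json.load(f)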
14. 14
No NVRAM
Cannot assume special connectivity
– no separate data paths for "online" vs. replica traffic
Cannot even assume more than 1 drive per node
– no RAID possible
Use replication, but …
– cannot assume peers have equal drive sizes
– what if the drive on one machine is 10x larger than the drive on another?
No choice but to replicate for reliability
Reliability With Commodity Hardware
15. 15
Replication is easy, right? All we have to do is send the same bits
to the master and replica.
Reliability via Replication
[Diagrams: normal replication, where clients write to the primary server and the primary forwards the bits to the replica, vs. Cassandra-style replication, where the writes go to each replica independently]
16. 16
When the replica comes back, it is stale
– it must be brought up-to-date
– until then, exposed to failure
But crashes occur…
[Diagrams: with a primary, the primary re-syncs the replica when it comes back; with Cassandra-style replication, the replica remains stale until an "anti-entropy" process is kicked off by the administrator. Who re-syncs?]
22. 22
HDFS solves the problem a third way
Make everything read-only
– Nothing to re-sync
Single writer, no reads allowed while writing
File close is the transaction that allows readers to see data
– unclosed files are lost
– cannot write any further to closed file
Real-time not possible with HDFS
– to make data visible, must close file immediately after writing
– too many files is a serious problem with HDFS (a well-documented limitation)
HDFS therefore cannot do NFS, ever
– No “close” in NFS … can lose data any time
Unless it's Apache HDFS …
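To see how quickly "close a file after every write for visibility" collides with the file-count limit, here is a back-of-the-envelope calculation. The ingest workload numbers are assumptions for illustration; the 200-million-file ceiling is the federation limit quoted later in this deck.

    # Assumed near-real-time ingest: each writer closes one new file per second for visibility.
    writers = 1000
    files_per_second_per_writer = 1
    seconds_per_day = 86_400

    files_per_day = writers * files_per_second_per_writer * seconds_per_day
    print(files_per_day)                         # 86,400,000 new files per day

    # Upper end of the HDFS federation file-count limit cited later in the deck.
    file_limit = 200_000_000
    print(round(file_limit / files_per_day, 1))  # ~2.3 days until the limit is hit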
23. 23
To support normal apps, need full read/write support
Let's return to the issue: re-syncing the replica when it comes back
This is the 21st century…
[Diagrams repeated from the earlier slide: the primary re-syncs the replica vs. the replica staying stale until an administrator-initiated "anti-entropy" process. Who re-syncs?]
28. 28
24 TB / server
– @ 1000MB/s = 7 hours
– in practical terms, @ 200 MB/s = 35 hours
Did you say you want to do this online?
– throttle re-sync rate to 1/10th
– 350 hours to re-sync (= 15 days)
What is your Mean Time To Data Loss (MTTDL)?
– how long before a double disk failure?
– a triple disk failure?
How long to re-sync?
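The arithmetic above, restated as a runnable check. The 24 TB drive capacity and 200 MB/s throughput are the slide's figures; the code just redoes the division.

    bytes_per_server = 24 * 10**12        # 24 TB per server
    resync_rate = 200 * 10**6             # practical re-sync rate, bytes per second
    hours = bytes_per_server / resync_rate / 3600
    print(round(hours))                   # ~33 hours, the slide's "35 hours" ballpark

    throttled_hours = hours * 10          # throttle to 1/10th so the cluster stays usable online
    print(round(throttled_hours), round(throttled_hours / 24))  # ~333 hours, ~14 days ("350 hours / 15 days")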
32. 32
Use dual-ported disk to side-step this problem
Traditional solutions
[Diagram: primary and replica servers with NVRAM attached to a dual-ported, RAID-6 disk array with idle spares; this gives up commodity hardware and large-scale clustering in favor of large purchase contracts and a 5-year spare-parts plan]
34. 34
Forget Performance?
[Diagram: the traditional architecture keeps function (apps and an RDBMS) separate from data stored on a SAN/NAS; with Hadoop, every node holds both data and function, with the apps on top]
Geographically dispersed also?
37. 37
Chop the data on each node into 1000's of pieces
– not millions of pieces, only 1000's
– pieces are called containers
Spread replicas of each container across the cluster
What MapR does
45. 45
100-node cluster
each node holds 1/100th of every node's data
when a server dies
the entire cluster re-syncs the dead node's data
MapR Replication Example
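A small simulation of the idea, not MapR's actual placement code: split each node's data into many containers, scatter each container's replicas across the cluster, and the re-replication work after a node death lands in small, roughly equal slices on every survivor.

    import random
    from collections import Counter

    random.seed(0)
    NODES = 100
    CONTAINERS_PER_NODE = 1000            # thousands of pieces per node, not millions
    REPLICAS = 3

    # container id -> nodes holding a replica (first entry is the "home" node)
    placement = {}
    cid = 0
    for home in range(NODES):
        for _ in range(CONTAINERS_PER_NODE):
            others = random.sample([n for n in range(NODES) if n != home], REPLICAS - 1)
            placement[cid] = [home] + others
            cid += 1

    dead = 42
    # Every container that lost a replica is re-replicated by one of its surviving holders.
    work = Counter()
    for holders in placement.values():
        if dead in holders:
            work[random.choice([n for n in holders if n != dead])] += 1

    print(len(work), "surviving nodes share the re-sync work")
    print("containers per helper, min/max:", min(work.values()), max(work.values()))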
50. 50
99 nodes re-sync'ing in parallel
– 99x the number of drives
– 99x the number of Ethernet ports
– 99x the CPUs
Each is re-sync'ing 1/100th of the data
MapR Re-sync Speed
Net speed-up is about 100x
– 350 hours vs. 3.5
MTTDL is 100x better
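Continuing the earlier calculation: with the whole cluster sharing the work, the throttled 350-hour re-sync collapses to a few hours, and the window in which a second or third failure can cause data loss shrinks by the same factor, which is what drives the MTTDL claim.

    single_node_resync_hours = 350        # throttled re-sync time from the earlier slide
    helpers = 99                          # surviving nodes that each re-sync 1/100th of the data
    parallel_resync_hours = single_node_resync_hours / helpers
    print(round(parallel_resync_hours, 1))                          # ~3.5 hours

    # To first order, MTTDL improves by the same ratio: the exposure window is ~100x shorter.
    print(round(single_node_resync_hours / parallel_resync_hours))  # ~99x, "about 100x"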
52. 52
Writes are synchronous
Data is replicated in a "chain"
fashion
– utilizes full-duplex network
Meta-data is replicated in a
"star" manner
– response time better
MapR's Read-write Replication
[Diagrams: clients client1 … clientN write to the primary; data replication chains from the primary through each replica in turn, while meta-data replication goes from the primary to each replica in a star]
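A toy sketch of the two shapes, not MapR's wire protocol: in chain replication the primary sends one copy to the next replica, which forwards it onward, so each full-duplex link is used once; in star replication the primary sends a copy to every replica directly, paying more on its own uplink but saving a forwarding hop of latency, which suits small meta-data updates.

    from typing import List

    class Node:
        def __init__(self, name: str):
            self.name = name
            self.log = []

        def receive(self, data: bytes, forward_to: List["Node"]) -> None:
            self.log.append(data)                 # apply the write locally
            if forward_to:                        # keep the chain going, if any
                forward_to[0].receive(data, forward_to[1:])

    def replicate_chain(data: bytes, replicas: List[Node]) -> None:
        # Primary hands one copy to the first replica; each replica forwards to the next.
        if replicas:
            replicas[0].receive(data, replicas[1:])

    def replicate_star(data: bytes, replicas: List[Node]) -> None:
        # Primary sends a separate copy straight to every replica: better response time,
        # but the primary's outbound link carries N copies. Used here for meta-data.
        for r in replicas:
            r.receive(data, [])

    a, b = Node("r1"), Node("r2")
    replicate_chain(b"block-7", [a, b])
    replicate_star(b"meta-7", [a, b])
    print(a.log, b.log)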
53. 53
As data size increases, writes
spread more, like dropping a
pebble in a pond
Larger pebbles spread the
ripples farther
Space is balanced by moving idle containers
Container Balancing
• Servers keep a bunch of containers "ready to go".
• Writes get distributed around the cluster.
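A toy version of the balancing idea, purely illustrative: new containers are created on the emptiest servers, and when a server grows far above the cluster average, idle containers are moved off it. The capacities and thresholds below are made-up numbers.

    # Used space in GB per server (assumed figures).
    usage = {"s1": 800, "s2": 300, "s3": 350, "s4": 310}

    def pick_server_for_new_container() -> str:
        # New writes ripple outward: they land on the emptiest server.
        return min(usage, key=usage.get)

    def rebalance(slack_gb: int = 100, container_gb: int = 30) -> None:
        # Move idle containers off servers that are well above the cluster average.
        avg = sum(usage.values()) / len(usage)
        for server in sorted(usage, key=usage.get, reverse=True):
            while usage[server] > avg + slack_gb:
                usage[server] -= container_gb
                usage[pick_server_for_new_container()] += container_gb

    print(pick_server_for_new_container())   # 's2'
    rebalance()
    print(usage)                             # s1's excess spread across the emptier servers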
54. 54
MapR Container Resync
MapR supports 100% random writes
– a very tough problem
On a complete crash, all
replicas diverge from
each other
On recovery, which one
should be master?
[Diagram: a complete cluster crash leaves the replicas mutually divergent]
55. 55
MapR Container Resync
MapR can detect exactly where
replicas diverged
– even at 2000 MB/s update rate
Resync means
– roll back the rest to the divergence point
– roll-forward to converge with chosen
master
Done while online
– with very little impact on normal
operations
[Diagram: a new master is chosen after the crash and the other replicas converge to it]
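A toy illustration of the roll-back / roll-forward idea, not MapR's algorithm: treat each replica as an ordered log of updates, find the divergence point as the end of the longest common prefix with the chosen master, discard everything after it, and replay the master's tail.

    def resync(master_log: list, replica_log: list) -> list:
        # Find the divergence point: the longest common prefix of the two logs.
        i = 0
        while i < len(master_log) and i < len(replica_log) and master_log[i] == replica_log[i]:
            i += 1
        # Roll back the replica to the divergence point, then roll forward from the master.
        return replica_log[:i] + master_log[i:]

    # After a complete crash the replicas have diverged:
    master  = ["w1", "w2", "w3", "w5"]
    replica = ["w1", "w2", "w4"]
    print(resync(master, replica))   # ['w1', 'w2', 'w3', 'w5'], converged with the chosen master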
56. 56
Resync traffic is “secondary”
Each node continuously measures RTT to all its peers
Slower peers get throttled more
– an idle system runs at full speed
All of this happens automatically
MapR does Automatic Resync Throttling
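A rough sketch of RTT-proportional throttling; the linear back-off and the numbers are illustrative assumptions, not MapR's actual policy. A peer answering at its idle-baseline RTT gets the full re-sync rate, and the budget shrinks as its measured RTT climbs, so primary traffic stays first-class.

    def resync_rate(full_rate_mb_s: float, baseline_rtt_ms: float, measured_rtt_ms: float) -> float:
        # How much slower the peer responds than when idle; 1.0 means "looks idle".
        congestion = max(measured_rtt_ms / baseline_rtt_ms, 1.0)
        # Back off in proportion to congestion, but keep a small floor so re-sync always progresses.
        return max(full_rate_mb_s / congestion, 0.05 * full_rate_mb_s)

    print(resync_rate(200, 0.5, 0.5))   # idle peer: full 200 MB/s
    print(resync_rate(200, 0.5, 5.0))   # busy peer (10x baseline RTT): throttled to 20 MB/s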
58. 58
MapR's No-NameNode Architecture
HDFS Federation
• Multiple single points of failure
• Limited to 50-200 million files
• Performance bottleneck
• Commercial NAS required
MapR (distributed metadata)
• HA with automatic failover
• Instant cluster restart
• Up to 1T files (> 5000x advantage)
• 10-20x higher performance
• 100% commodity hardware
[Diagram: HDFS Federation with multiple NameNodes (holding namespaces A-F) backed by a NAS appliance in front of the DataNodes, vs. MapR with the metadata for A-F distributed across the data nodes themselves]
61. 61
MapR’s NFS allows Direct Deposit
Connectors not needed
No extra scripts or clusters to deploy and maintain
Random Read/Write
Compression
Distributed HA
[Diagram: web servers, database servers, application servers, … writing directly into the cluster over NFS]
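A minimal sketch of what "direct deposit" looks like from an application host, assuming the cluster is NFS-mounted there; the /mapr/my.cluster.com mount point and the log path are assumptions for illustration. The application appends with ordinary file I/O and the bytes land directly in the cluster, with no connector or staging step.

    import datetime

    # Assumed NFS mount of the cluster on this web/app/database server.
    LOG_DIR = "/mapr/my.cluster.com/apps/weblogs"

    def log_event(line: str) -> None:
        # Plain POSIX append; the data is immediately in the cluster and readable by jobs.
        day = datetime.date.today().isoformat()
        with open(f"{LOG_DIR}/{day}.log", "a") as f:
            f.write(line + "\n")

    log_event("GET /index.html 200 12ms")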
63. 63
MapR Volumes
100K volumes are OK, create as many as desired!
/projects
  /tahoe
  /yosemite
/user
  /msmith
  /bjohnson
Volumes dramatically simplify the
management of Big Data
• Replication factor
• Scheduled mirroring
• Scheduled snapshots
• Data placement control
• User access and tracking
• Administrative permissions
65. 65
M7 Tables
M7 tables integrated into storage
– always available on every node, zero admin
Unlimited number of tables
– Apache HBase is typically 10-20 tables (max 100)
No compactions
Instant-On
– zero recovery time
5-10x better performance
Consistent low latency
– at the 95th and 99th percentiles
73. 73
ALL Hadoop components are Highly Available, e.g., YARN
ApplicationMaster (old JT) and TaskTracker record their state in
MapR
On node-failure, AM recovers its state from MapR
– Works even if entire cluster restarted
All jobs resume from where they were
– Only from MapR
Allows pre-emption
– MapR can pre-empt any job, without losing its progress
– ExpressLane™ feature in MapR exploits it
MapR makes Hadoop truly HA
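A sketch of why recording job state in shared, replicated storage lets jobs resume after failures and survive pre-emption; the progress-file layout and field names are illustrative assumptions, not the actual YARN/MapR format.

    import json

    # Assumed location of this job's progress record on replicated cluster storage.
    PROGRESS = "/mapr/my.cluster.com/yarn/jobs/job-0042/progress.json"

    def load_progress() -> dict:
        try:
            with open(PROGRESS) as f:
                return json.load(f)              # e.g. {"done": ["t0", "t1"]}
        except FileNotFoundError:
            return {"done": []}

    def run_job(tasks, run_task, preempted=lambda: False) -> str:
        state = load_progress()                  # a restart picks up exactly where we left off
        for task in tasks:
            if task in state["done"]:
                continue                         # finished before the crash or pre-emption
            if preempted():
                return "yielded"                 # pre-empted: progress so far is already durable
            run_task(task)
            state["done"].append(task)
            with open(PROGRESS, "w") as f:       # record progress after every task
                json.dump(state, f)
        return "finished"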
79. 79
Where/how can YOU exploit
MapR’s unique advantage?
ALL your code can easily be scale-out HA
Save service-state in MapR
Save data in MapR
Use ZooKeeper to notice service failure
Restart anywhere, data+state will move there automatically
That’s what we did!
Only from MapR: HA for Impala, Hive, Oozie, Storm, MySQL,
SOLR/Lucene, Kafka, …
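A condensed sketch of that recipe using the kazoo ZooKeeper client. The ZooKeeper addresses, znode path, and the state-file location on cluster storage are illustrative assumptions, and error handling is omitted.

    import json
    from kazoo.client import KazooClient

    ZK_HOSTS = "zk1:2181,zk2:2181,zk3:2181"                          # assumed ensemble
    STATE = "/mapr/my.cluster.com/services/myservice/state.json"     # assumed state file

    def run_service():
        # Recover state and data from cluster storage; it follows us to whichever node we start on.
        try:
            with open(STATE) as f:
                state = json.load(f)
        except FileNotFoundError:
            state = {"processed": 0}
        while True:
            state["processed"] += 1                                  # ... do real work here ...
            with open(STATE, "w") as f:                              # checkpoint back to storage
                json.dump(state, f)

    zk = KazooClient(hosts=ZK_HOSTS)
    zk.start()
    # Leader election: exactly one instance runs; if it dies, its ephemeral znode vanishes
    # and a standby on another node wins the election and takes over with the same state.
    election = zk.Election("/myservice/leader", identifier="standby-on-node42")
    election.run(run_service)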
80. 80
Build cluster brick by brick, one node at a time
Use commodity hardware at rock-bottom prices
Get enterprise-class reliability: instant-restart, snapshots,
mirrors, no-single-point-of-failure, …
Export via NFS, ODBC, Hadoop, and other standard protocols
MapR: Unlimited Scale
# files, # tables: trillions
# rows per table: trillions
Amount of data: 1-10 exabytes
# nodes: 10,000+