Why your Spark job is failing
● Data science at Cloudera
● Recently led Apache Spark development at Cloudera
● Before that, committer on Apache YARN and MapReduce
● Hadoop Project Management Committee
com.esotericsoftware.kryo.KryoException: Unable to find class:
$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$anonfun$4$$anonfun$apply$3
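The $iwC$... names are synthetic wrapper classes the Spark REPL generates around closures; this exception means Kryo on an executor could not resolve one of them, usually because the class never made it onto the executor's classpath. A hedged sketch of the usual Kryo setup (assuming Spark 1.2+; MyClass is a hypothetical application class, not from the slides):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  // Registering classes up front keeps Kryo from embedding full class names
  // that executors may later fail to resolve. MyClass is hypothetical.
  .registerKryoClasses(Array(classOf[MyClass]))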
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0:0 failed 4 times, most recent failure: Exception failure in TID 6 on host bottou02-10g.pa.cloudera.com: java.lang.ArithmeticException: / by zero
$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply$mcII$sp(<console>:13)
$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:13)
$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:13)
scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1016)
[...]
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
[...]
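The frames point at line 13 of a spark-shell session (<console>:13), inside an anonymous function compiled from the REPL. An illustrative snippet that fails the same way (not the presenter's code):

// Every even x makes the divisor zero; the task throws
// java.lang.ArithmeticException: / by zero and is retried 4 times
// before the stage, and with it the job, is aborted.
sc.parallelize(1 to 10).map(x => 1 / (x % 2)).count()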
val file = sc.textFile("hdfs://...")
file.filter(_.startsWith("banana"))
  .count()
[Diagram: a Job breaks down into Stages, and each Stage into parallel Tasks]
val rdd1 = sc.textFile("hdfs://...")
  .map(someFunc)
  .filter(filterFunc)

[Diagram: textFile → map → filter]
val rdd2 = sc.hadoopFile("hdfs://...")
  .groupByKey()
  .map(someOtherFunc)

[Diagram: hadoopFile → groupByKey → map]
val rdd3 = rdd1.join(rdd2)
  .map(someFunc)

[Diagram: join → map]
rdd3.collect()
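Nothing runs until this action: the transformations above only build up the lineage graph. As a hedged aside (not on the slides), that lineage can be printed with toDebugString, where each new indentation level marks a shuffle dependency, and therefore a stage boundary:

// Print rdd3's lineage; shuffle dependencies appear as new indentation
// levels, matching the stage boundaries the scheduler will create.
println(rdd3.toDebugString)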
[Diagram: the full DAG: textFile → map → filter and hadoopFile → groupByKey → map feed into join → map]
[Diagram: the scheduler cuts the DAG into Stages at shuffle boundaries; each Stage runs as a set of parallel Tasks]
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0:0 failed 4 times, most recent failure: Exception failure in TID 6 on host bottou02-10g.pa.cloudera.com: java.lang.ArithmeticException: / by zero
$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply$mcII$sp(<console>:13)
$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:13)
$iwC$$iwC$$iwC$$iwC$$anonfun$1.apply(<console>:13)
scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1016)
[...]
14/04/22 11:59:58 ERROR executor.Executor: Exception in task ID 2866
java.io.IOException: Filesystem closed
at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:565)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:648)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:706)
at java.io.DataInputStream.read(DataInputStream.java:100)
at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:209)
at org.apache.hadoop.util.LineReader.readLine(LineReader.java:173)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:206)
at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:45)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:164)
at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:149)
at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:27)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:161)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:102)
at org.apache.spark.scheduler.Task.run(Task.scala:53)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:211)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:176)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:724)
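"Filesystem closed" typically means the JVM-wide cached HDFS client was closed (often while a task was being killed) out from under another task still reading on the same executor. A hedged workaround, not shown on the slides, is to disable the FileSystem cache so each consumer gets its own client:

// Give each caller its own DFSClient instead of the shared cached one.
sc.hadoopConfiguration.set("fs.hdfs.impl.disable.cache", "true")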
[Diagram: YARN architecture: the Client submits an application to the ResourceManager, which launches an ApplicationMaster in a Container on a NodeManager; the ApplicationMaster requests further Containers]
[Diagram: the ApplicationMaster's Containers run the application's work, here a Map Task and a Reduce Task, on the NodeManagers]
Container [pid=63375, containerID=container_1388158490598_0001_01_000003] is running beyond physical memory limits. Current usage: 2.1 GB of 2 GB physical memory used; 2.8 GB of 4.2 GB virtual memory used. Killing container.
[Diagram: YARN memory layout: yarn.nodemanager.resource.memory-mb caps each node; one executor container = spark.yarn.executor.memoryOverhead + spark.executor.memory; inside the executor heap, spark.shuffle.memoryFraction and spark.storage.memoryFraction reserve room for shuffle data and cached blocks]
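When the container-killed error above appears, the usual fix is growing the overhead allotment rather than the heap. A minimal sketch, with illustrative values that are not from the slides:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  // Heap for the executor JVM.
  .set("spark.executor.memory", "2g")
  // Headroom the YARN container gets beyond the heap, for JVM and native
  // overhead; raise this when "running beyond physical memory limits".
  .set("spark.yarn.executor.memoryOverhead", "512")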
[Diagram: on the reduce side, shuffle Blocks are deserialized into an ExternalAppendOnlyMap]
[Diagram: the map accumulates key → values entries]
[Diagram: when the map outgrows its memory threshold, it sorts its contents and spills them to disk]
rdd.reduceByKey(reduceFunc, numPartitions = 1000)
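Raising numPartitions shrinks the slice of data each reduce task handles, so each task's ExternalAppendOnlyMap stays under its memory threshold and spills less, or not at all.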
java.io.FileNotFoundException: /dn6/spark/local/spark-local-20140610134115-2cee/30/merged_shuffle_0_368_14 (Too many open files)
[Diagram: map Tasks write out their key → values shuffle output, which reduce Tasks then fetch]
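With the hash-based shuffle this means on the order of map tasks × reduce partitions files, which can exhaust the process's file-descriptor limit. Hedged mitigations beyond what the slides spell out: raise the limit with ulimit -n, or consolidate map outputs per core:

// Reuse one set of shuffle files per core instead of one per map task
// (applies to the hash shuffle only; removed in later Spark releases).
conf.set("spark.shuffle.consolidateFiles", "true")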
[Diagram: hash-based shuffle: each map task writes its Records into a separate File per Partition]
[Diagram: sort-based shuffle: Records are buffered, sorted by partition, and spilled, producing a Single file holding the Records for Partitions 1, 2, 3 plus an Index file locating each partition]
conf.set("spark.shuffle.manager", "sort")
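Sort-based shuffle produces one data file and one index file per map task, sidestepping the file-count problem; it became the default in Spark 1.2, so this setting is only needed on older releases.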
● No 
● Distributed systems are complicated