
When I run the following command, I expect the exit code to be 0, since my combined container runs a test suite that exits successfully with code 0.

docker-compose up --build --exit-code-from combined

Unfortunately, I consistently receive an exit code of 137, even when the tests in my combined container run successfully and the container exits with code 0 (more details on how that happens below).

Below is my docker-compose version:

docker-compose version 1.25.0, build 0a186604

According to this post, exit code 137 can be due to two main issues:

  1. The container received a docker stop and the app is not gracefully handling SIGTERM.
  2. The container has run out of memory (OOM).
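For context on both causes: 137 is the generic exit status for a process terminated by SIGKILL (128 + signal number 9), whether the kill came from the OOM killer or from Docker escalating after a stop timeout. A quick shell sketch reproduces the status with no Docker involved:

```shell
# Exit status 137 = 128 + 9: the process was terminated by SIGKILL.
# Reproduce it locally, no Docker required:
bash -c 'kill -KILL $$'   # the child process SIGKILLs itself
echo "exit status: $?"    # prints: exit status: 137
```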

I know the 137 exit code is not because my container has run out of memory. When I run docker inspect <container-id>, I can see that "OOMKilled" is false as shown in the snippet below. I also have 6GB of memory allocated to the Docker Engine which is plenty for my application.

[
    {
        "Id": "db4a48c8e4bab69edff479b59d7697362762a8083db2b2088c58945fcb005625",
        "Created": "2019-12-12T01:43:16.9813461Z",
        "Path": "/scripts/init.sh",
        "Args": [],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false, <---- shows container did not run out of memory
            "Dead": false,
            "Pid": 0,
            "ExitCode": 137,
            "Error": "",
            "StartedAt": "2019-12-12T01:44:01.346592Z",
            "FinishedAt": "2019-12-12T01:44:11.5407553Z"
        },

My container doesn't exit from a docker stop, so I don't think the first reason is relevant to my situation either.

How my Docker containers are set up

I have two Docker containers:

  1. b-db - contains my database
  2. b-combined - contains my web application and a series of tests, which run once the container is up and running.

I'm using a docker-compose.yml file to start both containers.

version: '3'
services:
    db:
        build:
            context: .
            dockerfile: ./docker/db/Dockerfile
        container_name: b-db
        restart: unless-stopped
        volumes:     
            - dbdata:/data/db
        ports:
            - "27017:27017"
        networks:
            - app-network

    combined:
        build:
            context: .
            dockerfile: ./docker/combined/Dockerfile
        container_name: b-combined
        restart: unless-stopped
        env_file: .env
        ports:
            - "5000:5000"
            - "8080:8080"
        networks:
            - app-network
        depends_on:
            - db

networks:
    app-network:
        driver: bridge

volumes:
    dbdata:
    node_modules:
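One detail in the file above that may be worth revisiting: restart: unless-stopped tells Docker to restart a container after it exits, which is at odds with a test-runner container that is supposed to exit once and report a status. This is a guess at a contributing factor, not a confirmed fix, but a sketch of the combined service without the restart policy (everything else unchanged) would look like:

```yaml
    combined:
        build:
            context: .
            dockerfile: ./docker/combined/Dockerfile
        container_name: b-combined
        # No restart policy here: the test runner is expected to exit on
        # its own, and "unless-stopped" would restart it after the run.
        env_file: .env
        ports:
            - "5000:5000"
            - "8080:8080"
```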

Below is the Dockerfile for the combined service in docker-compose.yml.

FROM cypress/included:3.4.1

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 5000

RUN npm install -g history-server nodemon

RUN npm run build-test

EXPOSE 8080

COPY ./docker/combined/init.sh /scripts/init.sh

RUN ["chmod", "+x", "/scripts/init.sh"]

ENTRYPOINT [ "/scripts/init.sh" ]

Below is what is in my init.sh file.

#!/bin/bash
# Start front end server
history-server dist -p 8080 &
front_pid=$!

# Start back end server that interacts with DB
nodemon -L server &
back_pid=$!

# Run tests
NODE_ENV=test $(npm bin)/cypress run --config video=false --browser chrome

# Error code of the test
test_exit_code=$?

echo "TEST ENDED WITH EXIT CODE OF: $test_exit_code"

# End front and backend server
kill -9 $front_pid
kill -9 $back_pid

# Exit with the error code of the test
echo "EXITING SCRIPT WITH EXIT CODE OF: $test_exit_code"
exit "$test_exit_code"
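A side note on the kill -9 calls above: SIGKILL cannot be trapped, so the servers get no chance to close their connections (e.g. to Mongo). A hedged sketch of the same shutdown pattern using SIGTERM instead, with sleep 30 standing in for the real history-server / nodemon processes:

```shell
#!/bin/bash
# Sketch: shut background children down with SIGTERM (trappable) instead
# of kill -9 (SIGKILL, untrappable). "sleep 30" is a stand-in for the
# real background server processes.

sleep 30 &                 # placeholder background server
child_pid=$!

# Forward docker stop's SIGTERM to the child so it can exit cleanly.
trap 'kill -TERM "$child_pid" 2>/dev/null' TERM INT

# ... the test run would happen here ...
test_exit_code=0           # stand-in for the real test status

# Graceful shutdown: SIGTERM + wait, rather than kill -9.
kill -TERM "$child_pid"
wait "$child_pid" 2>/dev/null

echo "tests finished with status $test_exit_code"
```

The real script would still end with exit "$test_exit_code"; the only change is SIGTERM plus wait in place of kill -9.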

Below is the Dockerfile for my db service. All it's doing is copying some local data into the Docker container and then initialising the database with that data.

FROM  mongo:3.6.14-xenial

COPY ./dump/ /tmp/dump/

COPY mongo_restore.sh /docker-entrypoint-initdb.d/

RUN chmod 777 /docker-entrypoint-initdb.d/mongo_restore.sh

Below is what is in mongo_restore.sh.

#!/bin/bash
# Creates db using copied data
mongorestore /tmp/dump

Below are the last few lines of output when I run docker-compose up --build --exit-code-from combined; echo $?.

...
b-combined | user disconnected
b-combined | Mongoose disconnected
b-combined | Mongoose disconnected through Heroku app shutdown
b-combined | TEST ENDED WITH EXIT CODE OF: 0 ===========================
b-combined | EXITING SCRIPT WITH EXIT CODE OF: 0 =====================================
Aborting on container exit...
Stopping b-combined   ... done
137

What is confusing, as you can see above, is that the test and the script both ended with an exit code of 0 (all my tests passed), yet the container still exited with an exit code of 137.

What is even more confusing is that when I comment out the following line (which runs my Cypress integration tests) from my init.sh file, the container exits with a 0 exit code as shown below.

NODE_ENV=test $(npm bin)/cypress run --config video=false --browser chrome

Below is the output I receive when I comment out or remove that line from init.sh.

...
b-combined | TEST ENDED WITH EXIT CODE OF: 0 ===========================
b-combined | EXITING SCRIPT WITH EXIT CODE OF: 0 =====================================
Aborting on container exit...
Stopping b-combined   ... done
0

How do I get docker-compose to return me a zero exit code when my tests run successfully and a non-zero exit code when they fail?

EDIT:

After running the following docker-compose command in debug mode, I noticed that b-db seems to have trouble shutting down and may be receiving a SIGKILL from Docker as a result.

docker-compose --log-level DEBUG up --build --exit-code-from combined; echo $?

Is this indeed the case according to the following output?

...
b-combined exited with code 0
Aborting on container exit...
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/json?limit=-1&all=1&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Db-property%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 3819
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/json?limit=-1&all=0&size=0&trunc_cmd=0&filters=%7B%22label%22%3A+%5B%22com.docker.compose.project%3Db-property%22%2C+%22com.docker.compose.oneoff%3DFalse%22%5D%7D HTTP/1.1" 200 4039
http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/attach?logs=0&stdout=1&stderr=1&stream=1 HTTP/1.1" 101 0
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None
Stopping b-combined   ...
Stopping b-db         ...
Pending: {<Container: b-db (0626d6)>, <Container: b-combined (196f3e)>}
Starting producer thread for <Container: b-combined (196f3e)>
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
Pending: {<Container: b-db (0626d6)>}
... (the same "Pending: {<Container: b-db (0626d6)>}" line repeats many more times while b-db fails to stop) ...
http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/wait HTTP/1.1" 200 32
http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/stop?t=10 HTTP/1.1" 204 0
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "POST /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561b...
Stopping b-combined   ... done
Finished processing: <Container: b-combined (196f3e)>
Pending: {<Container: b-db (0626d6)>}
Starting producer thread for <Container: b-db (0626d6)>
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None
Pending: set()
Pending: set()
Pending: set()
Pending: set()
Pending: set()
Pending: set()
http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None
http://localhost:None "POST /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/stop?t=10 HTTP/1.1" 204 0
http://localhost:None "POST /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/wait HTTP/1.1" 200 30
Stopping b-db         ... done
Pending: set()
http://localhost:None "GET /v1.25/containers/0626d6bf49e5236440c82de4e969f31f4f86280d6f8f555f05b157fa53bae9b8/json HTTP/1.1" 200 None
http://localhost:None "GET /v1.25/containers/196f3e622847b4c4c82d8d761f9f19155561be961eecfe874bbb04def5b7c9e5/json HTTP/1.1" 200 None
137

9 Answers


Docker exit code 137 may mean Docker doesn't have enough RAM to finish the work.

Unfortunately, Docker consumes a lot of RAM.

Go to the Docker Desktop app > Preferences > Resources > Advanced and increase the MEMORY - it's best to double it.

  • However, exit code 137 doesn't always indicate an OOM issue. For instance, in my case, I'm dealing with exit code 137 but with the following details: "State":"Status":"exited","Running": false,"Paused": false,"Restarting": false,"OOMKilled": false,"Dead": false, "Pid": 0,"ExitCode": 137,"Error": "linux runtime spec devices: error gathering device information while adding custom device \"/dev/ttyUSB0\": no such file or directory", Commented Oct 4, 2021 at 12:51
  • This answer helped me, even though it isn't relevant to the question. As of this writing, at least, the question includes: """I know the 137 exit code is not because my container has run out of memory. When I run docker inspect <container-id>, I can see that "OOMKilled" is false as shown in the snippet below. I also have 6GB of memory allocated to the Docker Engine which is plenty for my application.""" Commented Mar 16, 2023 at 21:00
  • The important parts of the docker inspect output are "OOMKilled": false and "ExitCode": 137. If OOMKilled is false then the memory limit was not the cause of exit code 137. Commented Mar 16, 2023 at 21:07
  • Thanks @DavidWiniecki - wasn't aware that the 137 error may have causes other than OOM. In your case - if OOMKilled was false, what was the reason for the 137 error?
    – codeepic
    Commented Mar 18, 2023 at 22:52
  • In my case I have a container that never exits before the timeout, no matter how long the timeout is (its Dockerfile has CMD python3 -c 'input()'), so docker inspect always reports "ExitCode": 137 after it kills it. OP's question was about something similar as seen in this answer. (That's not causing me any pain though. Actually I was having a problem with OOM with a different container, which is why this answer was helpful to me even though it's not completely related to OP's question.) Commented Mar 20, 2023 at 22:23

The message that stands out to me is: Aborting on container exit...

From docker-compose docs:

--abort-on-container-exit Stops all containers if any container was stopped.

Are you running docker-compose with this flag? If that is the case, think about what it means.

Once b-combined is finished, it simply exits. That means container b-db will be forced to stop as well. Even though b-combined returned with exit code 0, b-db's forced shutdown was likely not handled gracefully by mongod.

EDIT: I just realized you have --exit-code-from in the command line. That implies --abort-on-container-exit.

Solution: b-db needs more time to exit gracefully. Using docker-compose up --timeout 600 avoids the error.
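The same grace period can also be baked into the compose file per service, so it doesn't have to be passed on every command line. A sketch, assuming a Compose file version that supports stop_grace_period (the default is 10s before Docker escalates from SIGTERM to SIGKILL):

```yaml
services:
    db:
        # Give mongod up to 10 minutes to shut down cleanly before
        # Docker escalates from SIGTERM to SIGKILL (default is 10s).
        stop_grace_period: 600s
```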

  • Yep, since I'm using --exit-code-from, it implies --abort-on-container-exit - documentation here.
    – ptk
    Commented Dec 12, 2019 at 3:56
  • Part of the debugging I did was to start b-db alone and then run docker stop on that container and the container exited with a 0 exit code, which led me to believe that b-db is not the problem. Additionally, if I comment out NODE_ENV=test $(npm bin)/cypress run --config video=false --browser chrome and in the last line of init.sh I have exit 12, the exit code I receive is 12...
    – ptk
    Commented Dec 12, 2019 at 3:59
  • Alright, here's a simpler but similar compose. Service b depends on service a. Running with the same command line, notice there is a message for container a stopping Stopping tmp_a_1 ... done. Your example does not show that. Docker allows a default grace period of 10s for containers to stop. Is it possible that mongodb is not stopping gracefully in that period? Try using docker-compose up --timeout 600.
    – JulioHM
    Commented Dec 12, 2019 at 4:42
  • You're 100% correct regarding the timeout. Problem was b-db was not exiting within the default grace period and using the --timeout 600 option completely fixed the issue. Thank you :)
    – ptk
    Commented Dec 12, 2019 at 4:56

It might be an issue with RAM. I increased the memory using Docker preferences and started my service with docker-compose up; the server came up without any issue.

https://www.petefreitag.com/item/848.cfm

  • this was the problem for me, bumped up memory from 2 to 4 GB
    – Sibish
    Commented Jun 23, 2021 at 20:08

I had a similar problem when I was setting up a Python project. This error is due to memory allocation. In my case the allocated memory was 2 GB; I changed it to 4 GB and it worked. On Mac, go to the Docker icon in the top toolbar -> Preferences -> Resources -> Memory.


I had plenty of RAM allocated, but on a Mac M1, docker compose was not telling me the real story (even with --verbose). I only got this output when running "bare" with docker run:

WARNING: The requested image's platform (linux/amd64) does not match the detected host platform (linux/arm64/v8) and no specific platform was requested
Killed

That was the issue for me -- you need to take care of the image architecture.
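If the architecture should be pinned in the compose file itself, newer Compose versions support a platform key per service. A sketch (the service and image names here are placeholders, not from the question):

```yaml
services:
    app:
        # Hypothetical amd64-only image: pin the platform explicitly so
        # an arm64 host (Apple Silicon) doesn't pull a mismatched build.
        image: example/amd64-only-app
        platform: linux/amd64
```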

  • Same for me - I had been playing around with my terminal settings due to previous issues with Rosetta and had disabled the "Open using Rosetta" option. Enabling this fixed it.
    – Jordan
    Commented Jul 27, 2023 at 14:40

Not sure if this will resolve your issue, but can you try adding FROM scratch to the combined Dockerfile?

Docker runs instructions in a Dockerfile in order. A Dockerfile must start with a FROM instruction. The FROM instruction specifies the Base Image from which you are building. FROM may only be preceded by one or more ARG instructions, which declare arguments that are used in FROM lines in the Dockerfile. Ref: https://docs.docker.com/v17.09/engine/reference/builder/#format

Also, instead of using docker-compose up, try doing it in 2 steps to gather more information about where the issue is occurring.

  1. docker-compose --log-level DEBUG build
  2. docker-compose --log-level DEBUG run

https://docs.docker.com/compose/reference/overview/ [edited per the comments]

  • Apologies, I realised that I missed two lines at the top of my Dockerfile. I've updated my question to reflect my full Dockerfile for combined. Thanks for alerting me to the --log-level command. I'm taking a look right now. Just a note, the command should be docker-compose --log-level DEBUG build
    – ptk
    Commented Dec 12, 2019 at 4:15
  • I am glad it helped to get more info! :) Commented Dec 13, 2019 at 5:04
  • Note, --log-level is DEPRECATED and not working from 2.0.
    – mushipao
    Commented May 6, 2022 at 13:37

In Docker Desktop, go to Settings -> Docker Engine and update defaultKeepStorage to 30GB to solve this issue.

{
  "builder": {
    "gc": {
      "defaultKeepStorage": "30GB",
      "enabled": true
    }
  },
  "debug": false,
  "experimental": false,
  "features": {
    "buildkit": false
  },
  "insecure-registries": [],
  "registry-mirrors": []
}

On Windows, Docker defaults to the docker-desktop-data WSL distribution.

Run the following command using powershell:

wsl -l 

If wsl is not a recognized command, browse to the following location:

c:\windows\system32\

Now run the following command:

.\wsl.exe -l

You should see the following:

List of Distributions

If you don't see Ubuntu on the list you can install it using the following command:

.\wsl.exe --install -d Ubuntu-22.04

Run the following command to set your distribution to Ubuntu:

.\wsl.exe --set-version Ubuntu-22.04 2

If you have allocated enough memory for Docker and it worked earlier, try running:

docker system prune
