
I have been trying to understand an issue I've had when running the roribio16/alpine-sqs Docker image on one of my machines. Whenever I run the image without specifying any other settings (docker run roribio16/alpine-sqs), I get the following output:

[xxxx@yyyy ~]$ docker run roribio16/alpine-sqs
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/elasticmq.conf" during parsing
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/insight.conf" during parsing
2021-05-29 15:48:41,216 INFO Included extra file "/etc/supervisor/conf.d/sqs-init.conf" during parsing
2021-05-29 15:48:41,216 INFO Set uid to user 0 succeeded
2021-05-29 15:48:41,222 INFO RPC interface 'supervisor' initialized
2021-05-29 15:48:41,222 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2021-05-29 15:48:41,222 INFO supervisord started with pid 1
2021-05-29 15:48:42,225 INFO spawned: 'sqs-init' with pid 9
2021-05-29 15:48:42,229 INFO spawned: 'elasticmq' with pid 10
2021-05-29 15:48:42,230 INFO spawned: 'insight' with pid 11
cp: can't stat '/opt/custom/*.conf': No such file or directory

> [email protected] start /opt/sqs-insight
> node index.js

15:48:42.605 [main] INFO  org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
Loading config file from "/opt/sqs-insight/lib/../config/config_local.json"
15:48:42.929 [elasticmq-akka.actor.default-dispatcher-2] INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
Unable to load queues for  undefined
Config contains 0 queues.
library initialization failed - unable to allocate file descriptor table - out of memorylistening on port 9325
2021-05-29 15:48:43,233 INFO success: sqs-init entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,233 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,234 INFO success: insight entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:43,234 INFO exited: sqs-init (exit status 0; expected)
2021-05-29 15:48:44,318 INFO exited: elasticmq (terminated by SIGABRT (core dumped); not expected)
2021-05-29 15:48:45,322 INFO spawned: 'elasticmq' with pid 67
15:48:45.743 [main] INFO  org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
15:48:46.044 [elasticmq-akka.actor.default-dispatcher-2] INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
library initialization failed - unable to allocate file descriptor table - out of memory2021-05-29 15:48:47,223 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:47,389 INFO exited: elasticmq (terminated by SIGABRT (core dumped); not expected)
2021-05-29 15:48:48,393 INFO spawned: 'elasticmq' with pid 89
15:48:48.766 [main] INFO  org.elasticmq.server.Main$ - Starting ElasticMQ server (0.15.0) ...
15:48:49.066 [elasticmq-akka.actor.default-dispatcher-3] INFO  akka.event.slf4j.Slf4jLogger - Slf4jLogger started
library initialization failed - unable to allocate file descriptor table - out of memory^C2021-05-29 15:48:49,559 INFO success: elasticmq entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2021-05-29 15:48:49,559 WARN received SIGINT indicating exit request
2021-05-29 15:48:49,559 INFO waiting for insight, elasticmq to die
2021-05-29 15:48:49,566 INFO stopped: insight (terminated by SIGTERM)
2021-05-29 15:48:50,431 INFO stopped: elasticmq (terminated by SIGABRT (core dumped))

With a bit of googling I found this post, where somebody had the same issue with a different image and got it running by setting ulimits on the docker run command. That also worked for me:

docker run --ulimit nofile=122880:122880 roribio16/alpine-sqs

I checked the ulimits set inside the container when I didn't use this flag:

docker exec -it ca bash
 $ ulimit -a

and found that the nofile setting was ridiculously high, which I assume is what causes the container to run out of memory if too many files are opened simultaneously. I don't have a particularly good understanding of how this works, though, so I would appreciate any clarification somebody could offer on that topic as well.
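
For reference, this is roughly how I compared the host and container limits (any image started without --ulimit should show the same daemon default, as far as I understand; the --entrypoint override is just to skip the image's normal startup):

# on the host
ulimit -n
ulimit -Hn

# inside a fresh container started without any --ulimit flag
docker run --rm --entrypoint sh roribio16/alpine-sqs -c 'ulimit -n'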

Anyway, the point of that ramble is that I want to find out where the default Docker container ulimits are set, as I don't understand why they are so high on the machine I am using. I have another machine that does not have this problem.

I can find lots of ways to change the default limits, but there does not seem to be much information about where these limits get set in the first place. According to the Docker documentation, if custom values are not set, the ulimits should be inherited from my system, but as far as I can tell my system's nofile settings are much lower than what I'm seeing in the container.
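
As far as I can tell, these are the relevant places to check on a systemd-managed daemon (which is what both machines use):

# unit file plus any drop-in overrides the daemon was started with
systemctl cat docker.service

# the nofile limit systemd applies to dockerd
systemctl show docker.service --property=LimitNOFILE

# what the running daemon actually got
cat /proc/$(pidof dockerd)/limits | grep 'open files'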

(Both machines run Manjaro Linux; the one that doesn't have this issue runs XFCE and the one that does runs KDE.)

1 Answer


I found an explanation for the issue in this post.

At some point, Java 8 seems to allocate memory for the defined number of open files. If this number is too large, the Java process quits with the reported error message:

library initialization failed - unable to allocate file descriptor table - out of memory*** JBossAS process (85) received ABRT signal ***

In my case, I tried to start a WildFly 17 application server with Java 8 in a Docker container. The user under which the server is started had a huge number of open files set:

bash-4.2$ ulimit -a 
core file size          (blocks, -c) unlimited 
data seg size           (kbytes, -d) unlimited 
scheduling priority             (-e) 0 
file size               (blocks, -f) unlimited 
pending signals                 (-i) 29783 
max locked memory       (kbytes, -l) 64 
max memory size         (kbytes, -m) unlimited 
open files                      (-n) 1073741816 
pipe size            (512 bytes, -p) 8 
POSIX message queues     (bytes, -q) 819200 
real-time priority              (-r) 0 
stack size              (kbytes, -s) 8192 
cpu time               (seconds, -t) unlimited 
max user processes              (-u) unlimited 
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
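
If you want to see the effect of the limit in isolation, something like the following should reproduce it with the image from your question (this is an assumption on my part; I only verified the behaviour with my WildFly container):

# forcing a huge nofile limit should trigger the same abort
docker run --rm --ulimit nofile=1073741816 roribio16/alpine-sqs

# a sane limit lets the JVM start normally
docker run --rm --ulimit nofile=8096:8096 roribio16/alpine-sqs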

In the current Docker version (for me: 20.10.21) the default number of open files is infinity. Note that the open-files limit of the Docker daemon (dockerd) itself differs from the open-files limit that containers are started with.
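
You can also check whether a container was given an explicit limit or just inherited the daemon default; an empty list in the inspect output means no explicit ulimit was passed, so the daemon default applies (the container name here is only an example):

docker run -d --name sqs-test roribio16/alpine-sqs
docker inspect --format '{{.HostConfig.Ulimits}}' sqs-test
docker exec sqs-test sh -c 'ulimit -n'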

The open-files limit for the Docker daemon is configured in the systemd service configuration. You can override it with the command systemctl edit docker.service. This creates a new drop-in file in /etc/systemd/system/docker.service.d/. In my case I named the file override.conf and set the following properties:

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --default-ulimit nofile=8096:8096 --containerd=/run/containerd/containerd.sock
Environment="HTTP_PROXY=http://xxx.xxx.xxx.xxxx:yyyy"
Environment="HTTPS_PROXY=http://xxx.xxx.xxx.xxxx:yyyy"
# LimitNOFILE=infinity
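
After editing the override, reload systemd and restart the daemon so the new flag takes effect; showing the ExecStart property is one way to confirm the flag was picked up:

systemctl daemon-reload
systemctl restart docker.service
systemctl show docker.service --property=ExecStart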

Note that I did not override the default value (infinity) for the Docker daemon itself. You can check the actual open-files limit of a process with cat /proc/$pid/limits ($pid = process ID). For the Docker daemon this prints a huge number of open files:

# cat /proc/828/limits
Limit                     Soft Limit           Hard Limit           Units
Max cpu time              unlimited            unlimited            seconds
Max file size             unlimited            unlimited            bytes
Max data size             unlimited            unlimited            bytes
Max stack size            8388608              unlimited            bytes
Max core file size        unlimited            unlimited            bytes
Max resident set          unlimited            unlimited            bytes
Max processes             unlimited            unlimited            processes
Max open files            1073741816           1073741816           files
Max locked memory         65536                65536                bytes
Max address space         unlimited            unlimited            bytes
Max file locks            unlimited            unlimited            locks
Max pending signals       29783                29783                signals
Max msgqueue size         819200               819200               bytes
Max nice priority         0                    0
Max realtime priority     0                    0
Max realtime timeout      unlimited            unlimited            us

As you already found out with your linked post, you have to limit the number of open files a Docker container is started with. Processes inside the container inherit this value (see ulimit -a above). This is achieved with the daemon argument --default-ulimit nofile=8096:8096, which is defined in my service configuration file /etc/systemd/system/docker.service.d/override.conf. A value of 8096 open files is more than enough for my WildFly application server and solved the problem for me.
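
If you would rather not edit the systemd unit, the same default can, to my knowledge, also be set in the daemon configuration file /etc/docker/daemon.json (equivalent to the --default-ulimit flag; restart the daemon afterwards):

{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Soft": 8096,
      "Hard": 8096
    }
  }
}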

I hope this answer helps to clarify the issue.
