
We have 1TB of memory on a server which hosts an SAP application.

When the app is running, the memory usage shown in top is around 700GB. When the app is stopped, the usage comes down to 10GB. A reboot of the server brings it down to 1GB.

  • Even though no application is running, why does top still show 10GB used, and why does a reboot free the extra 9GB?
  • Is it possible to reclaim that 9GB without a reboot?

Output of free -g:

free -g
                   total       used       free     shared    buffers     cached
Mem:                1009        567        442          0          0        152
-/+ buffers/cache:              415        594
Swap:                  1          0          1
  • First things first, have you read linuxatemyram.com? Next, please provide the output of free and, even better, /proc/meminfo at each of these data points.
    – phemmer
    Commented Jul 4, 2014 at 21:45
  • What does the output of free show in these scenarios? It's likely that the 10G is still being shown as in use due to buffers and cache.
    – slm
    Commented Jul 5, 2014 at 1:49

2 Answers


Linux uses RAM differently from many other operating systems.

Rather than leaving RAM sitting unused, Linux caches data that it thinks might be needed again: recently read files, application binaries, and so on.

As a result, Linux's reported RAM usage is higher than what the running applications alone consume; the extra is cache that can be handed back whenever something else needs it. Run free -h and the -/+ buffers/cache row will show you that a lot of the "used" memory is really just cache.
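
Using the free -g output from the question as a worked example: the -/+ buffers/cache row moves the reclaimable memory from used to free, i.e. 567 - 0 - 152 = 415GB genuinely used by applications, and 442 + 0 + 152 = 594GB effectively available.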

If all of your memory is filled with cache and a program needs memory, the kernel will evict enough from the cache to make room for that program.
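
If the extra usage is indeed reclaimable cache, you can release it on demand instead of rebooting, which addresses the second question. A minimal sketch, assuming you run it as root; dropping caches only discards clean pages, so it is safe, but file access will be slower until the cache warms back up:

sync                               # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches  # 1 = page cache, 2 = dentries/inodes, 3 = both
free -g                            # the cached column should now be near zero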

  • Windows has a filesystem cache as well. It's itemized differently but it's still shown in Task Manager.
    – Bratchley
    Commented Jul 5, 2014 at 16:46
  • But I agree. OP is probably just looking in the wrong field for their free memory.
    – Bratchley
    Commented Jul 5, 2014 at 16:55

It is due to some file descriptors that are still open even though the app is stopped. You can list the open file descriptors using the techniques mentioned here.
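
For example, the open file descriptors of a process can be listed via /proc or lsof (a sketch; the PID here is hypothetical):

pid=1234               # hypothetical PID of the process to inspect
ls -l "/proc/$pid/fd"  # fd numbers as symlinks to the files they reference
lsof -p "$pid"         # one line per open file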

If you need to close the file descriptors without rebooting, you can follow the approach mentioned by Graeme here. However, you need to be aware of which file descriptors you are closing, as Graeme highlights in his answer. His answer is:

To answer literally, to close all open file descriptors for bash:

for fd in $(ls /proc/$$/fd); do
  eval "exec $fd>&-"
done

However this really isn't a good idea since it will close the basic file descriptors the shell needs for input and output. If you do this, none of the programs you run will have their output displayed on the terminal (unless they write to the tty device directly). In fact, in my tests closing stdin (exec 0>&-) just causes an interactive shell to exit.

What you may actually be looking to do is rather to close all file descriptors that are not part of the shell's basic operation. These are 0 for stdin, 1 for stdout and 2 for stderr. On top of this, some shells also seem to have other file descriptors open by default. In bash you have 255 (also for terminal I/O) and in dash I have 10, which points to /dev/tty rather than the specific tty/pts device the terminal is using. To close everything apart from 0, 1, 2 and 255 in bash:

for fd in $(ls /proc/$$/fd); do
  case "$fd" in
    0|1|2|255)
      ;;
    *)
      eval "exec $fd>&-"
      ;;
  esac
done

Note also that eval is required when redirecting a file descriptor contained in a variable; otherwise bash will expand the variable but treat it as part of the command (in this case it would try to exec the command 0 or 1 or whichever file descriptor you are trying to close). Also, using a glob instead of ls (e.g. /proc/$$/fd/*) seems to open an extra file descriptor for the glob, so ls seems the best solution here.
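
To illustrate the parsing problem described above, here is a minimal sketch (fd 3 is just an arbitrary example):

exec 3</dev/null    # open fd 3 so there is something to close
fd=3
exec $fd>&-         # WRONG: parsed as the command `3` with output closed;
                    # bash complains "exec: 3: not found" and fd 3 stays open
eval "exec $fd>&-"  # re-parsed after $fd expands, so fd 3 is really closed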

Update

For further information on the portability of /proc/$$/fd, please see Portability of file descriptor links. If /proc/$$/fd is unavailable, a drop-in replacement for $(ls /proc/$$/fd), using lsof (if that is available), would be $(lsof -p $$ -Ff | grep f[0-9] | cut -c 2-).
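
Putting the two together, a sketch of a small helper (list_fds is a hypothetical name) that prefers /proc and falls back to lsof:

list_fds() {
  # Print the open file descriptor numbers of process $1, one per line.
  if [ -d "/proc/$1/fd" ]; then
    ls "/proc/$1/fd"
  else
    lsof -p "$1" -Ff | grep 'f[0-9]' | cut -c 2-
  fi
}

list_fds $$   # e.g. list the current shell's open descriptors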

  • OP is asking about memory, not disk space.
    – phemmer
    Commented Jul 4, 2014 at 21:48
  • @Patrick, oh ok. So, the problem is more likely due to the open file descriptors, right? I mean, the memory is not released due to some open file descriptors. Am I correct in my understanding?
    – Ramesh
    Commented Jul 4, 2014 at 21:51
  • Well, one of the primary confusions in this subject is caching. However, caching comes into play whether file descriptors are open or not.
    – phemmer
    Commented Jul 4, 2014 at 21:52
  • @Patrick, thanks. So caching and file descriptors are not related in any way, right?
    – Ramesh
    Commented Jul 4, 2014 at 21:55
  • Generally, no, they are not.
    – phemmer
    Commented Jul 5, 2014 at 6:20
