
I have plenty of memory (32 GB) on my laptop, yet I still run into problems with the amount of free memory. I am running Linux (Fedora 27), and this happens a while after each reboot.

Checking the output of free, memory looks fine at first glance: 19 GB shows up as buff/cache, which in theory should be freed on demand:

# free -h
              total        used        free      shared  buff/cache   available
Mem:            30G         10G        419M        768M         19G        624M
Swap:          999M        999M        280K

But when I tried to start a virtual machine that needs 2 GB of memory, I got "Cannot allocate memory".

Looking at cat /proc/meminfo, I found that most of the "cached" memory has actually gone to Slab, specifically to SUnreclaim:

# cat /proc/meminfo
MemTotal:       32310876 kB
MemFree:          387332 kB
MemAvailable:     624464 kB
Buffers:           15120 kB
Cached:          1379140 kB
SwapCached:         7316 kB
Active:         10350772 kB
Inactive:        1330164 kB
Active(anon):   10028184 kB
Inactive(anon):  1085388 kB
Active(file):     322588 kB
Inactive(file):   244776 kB
Unevictable:         900 kB
Mlocked:             900 kB
SwapTotal:       1023996 kB
SwapFree:              0 kB
Dirty:              3940 kB
Writeback:             0 kB
AnonPages:      10280264 kB
Mapped:           761148 kB
Shmem:            827040 kB
Slab:           19615756 kB
SReclaimable:      80356 kB
SUnreclaim:     19535400 kB
KernelStack:       30272 kB
PageTables:       161940 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:    17048360 kB
Committed_AS:   28120088 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:         0 kB
ShmemHugePages:        0 kB
ShmemPmdMapped:        0 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:     128
HugePages_Free:      128
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:          262144 kB
DirectMap4k:    15651468 kB
DirectMap2M:    17266688 kB
DirectMap1G:     1048576 kB
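The relevant lines can be pulled out directly. A small helper (no assumptions beyond the standard /proc/meminfo format): SUnreclaim is kernel memory the page allocator cannot reclaim, which is why it does not count toward MemAvailable:

```shell
# Show slab totals from /proc/meminfo, converted from kB to GiB.
awk '/^Slab:|^SReclaimable:|^SUnreclaim:/ { printf "%-14s %6.1f GiB\n", $1, $2 / 1048576 }' /proc/meminfo
```

On the numbers above this prints roughly 18.7 GiB for Slab and 18.6 GiB for SUnreclaim, i.e. almost none of that slab is reclaimable.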

Checking slabtop, I found that most of the consumption is in kmalloc-2048:

 # slabtop -o
 Active / Total Objects (% used)    : 10959485 / 11158942 (98,2%)
 Active / Total Slabs (% used)      : 653007 / 653007 (100,0%)
 Active / Total Caches (% used)     : 112 / 134 (83,6%)
 Active / Total Size (% used)       : 19572995,47K / 19615517,82K (99,8%)
 Minimum / Average / Maximum Object : 0,01K / 1,76K / 23,12K

  OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME                   
9692082 9692082 100%    2,00K 623116       16  19939712K kmalloc-2048           
218790 218790 100%    0,02K   1287      170      5148K avtab_node             
120140 116705  97%    0,20K   6007       20     24028K vm_area_struct         
106794  47394  44%    0,04K   1047      102      4188K Acpi-Namespace         
103936 103832  99%    0,01K    203      512       812K kmalloc-8              
 99200  92297  93%    0,03K    775      128      3100K kmalloc-32             
 89024  85587  96%    0,06K   1391       64      5564K pid                    
 88320  87190  98%    0,02K    345      256      1380K kmalloc-16             
 70476  40684  57%    0,19K   3356       21     13424K dentry                 
 64576  40757  63%    0,06K   1009       64      4036K kmalloc-64             
 52210  50218  96%    0,09K   1135       46      4540K anon_vma               
 43200  36795  85%    0,25K   1350       32     10800K filp                   
 40960  35936  87%    0,02K    160      256       640K selinux_file_security  
 37950  37731  99%    0,13K   1265       30      5060K kernfs_node_cache      
 32200  21252  66%    0,57K   1150       28     18400K radix_tree_node        
 23556  21252  90%    0,59K    906       26     14496K inode_cache            
 23409  13952  59%    1,06K    855       30     27360K ext4_inode_cache       
 20224  20014  98%    0,06K    316       64      1264K ebitmap_node           
 19530  15676  80%    0,09K    465       42      1860K kmalloc-96             
 19210  10443  54%    0,05K    226       85       904K ftrace_event_field     
 13398   7867  58%    0,75K    638       21     10208K xfrm_state             
 13216  13117  99%    0,07K    236       56       944K Acpi-Operand           
 11949  11689  97%    0,19K    569       21      2276K kmalloc-192            
 10569   8405  79%    0,10K    271       39      1084K buffer_head            
 10404  10404 100%    0,04K    102      102       408K ext4_extent_status     
  9775   6827  69%    0,70K    425       23      6800K shmem_inode_cache      
  9472   7628  80%    0,12K    296       32      1184K kmalloc-128            
  8823   8823 100%    0,08K    173       51       692K Acpi-State             
  6528   5762  88%    0,03K     51      128       204K avc_xperms_data        
  5616   4745  84%    0,50K    177       32      2832K kmalloc-512            
  5250   4268  81%    0,19K    250       21      1000K cred_jar               
  5110   5110 100%    0,05K     70       73       280K mbcache                
  4488   3876  86%    0,66K    187       24      2992K proc_inode_cache       
  3904   3017  77%    0,06K     61       64       244K kmem_cache_node        
  3808   3808 100%    0,14K    136       28       544K ext4_groupinfo_4k      
  3542   3369  95%    0,69K    154       23      2464K sock_inode_cache       
  3532   2875  81%    1,00K    113       32      3616K kmalloc-1024           
  3296   3021  91%    0,25K    103       32       824K kmalloc-256            
  3232   3232 100%    0,12K    101       32       404K seq_file               
  3162   3162 100%    0,04K     31      102       124K pde_opener             
  3040   2540  83%    0,25K     97       32       776K skbuff_head_cache      
  2968   2968 100%    0,07K     53       56       212K eventpoll_pwq          
  2784   2746  98%    1,00K     87       32      2784K UNIX                   
  2618   2518  96%    0,12K     77       34       308K jbd2_journal_head      
  2496   2392  95%    0,25K     78       32       624K proc_dir_entry         
  2432   2432 100%    0,12K     76       32       304K secpath_cache          
  2192   2103  95%    7,88K    551        4     17632K task_struct            
  2024   1852  91%    0,09K     44       46       176K trace_event_file       
  1950   1783  91%    1,06K     65       30      2080K signal_cache           
  1768   1768 100%    0,12K     52       34       208K cfq_io_cq              
  1638   1638 100%    0,10K     42       39       168K blkdev_ioc             
  1530   1530 100%    0,23K     46       34       368K posix_timers_cache     
  1512    864  57%    0,38K     72       21       576K kmem_cache             
  1368   1368 100%    0,16K     57       24       228K kvm_mmu_page_header    
  1300    616  47%    0,31K     52       25       416K nf_conntrack           
  1040   1040 100%    1,19K     40       26      1280K mm_struct              
  1036    955  92%    2,06K     70       15      2240K sighand_cache          
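A quick sanity check on those numbers: kmalloc-2048 alone accounts for essentially all of the SUnreclaim figure above:

```shell
# 9692082 objects x 2048 bytes each (figures from the slabtop output),
# converted to kB for comparison with /proc/meminfo:
echo $(( 9692082 * 2048 / 1024 ))
# prints 19384164 (kB) -- about 99% of the 19535400 kB SUnreclaim value
```

On a SLUB kernel you can usually go one step further: boot with slub_debug=U,kmalloc-2048 and then read /sys/kernel/slab/kmalloc-2048/alloc_calls (as root) to see which kernel call sites own the objects; a leak of this size is typically dominated by a single driver or module. (Which one is at fault here cannot be told from this output alone.)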

Why is it so huge, and is there a way to purge it without a reboot?

2 Answers


This is normal for Unix/Linux/BSD. When pages are read in from disk for any reason, they get stuffed into the cache and left there. That memory is still available if you need it, but freeing it costs some overhead, and if you need the same disk data again it doesn't have to be read back into memory. Notice your buff/cache is 19G? Only 10G is actually in use. If you do run out of memory, the system will start using swap and everything will slow down, depending on how fast your swap device is. Then you have a problem.
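For what it's worth, the difference between reclaimable cache and SUnreclaim can be seen directly. A sketch (needs root): drop_caches releases the page cache and reclaimable slab on demand, but cannot free unreclaimable slab:

```shell
# Flush dirty pages first so the caches can actually be dropped.
sync
# 1 = page cache, 2 = reclaimable slab (dentries, inodes), 3 = both.
echo 3 | sudo tee /proc/sys/vm/drop_caches
# SUnreclaim stays essentially unchanged -- only the kernel code
# that allocated it can free that memory.
grep -E '^(MemFree|SReclaimable|SUnreclaim):' /proc/meminfo
```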


Your available memory ("RAM") is low (0.6 GB).

First, the problem may be caused by an outdated system BIOS (improbable). In the BIOS menu, apply the default settings, then go to the Advanced tab, then Performance; if overclocking is enabled, disable XMP (Extreme Memory Profile), or better, disable overclocking entirely. Then check whether available memory has increased.

Second, I suggest limiting memory usage with systemctl set-property. In a terminal, run systemd-cgls; the output shows the hierarchy of control groups. Determine which session-*.scope is yours (e.g. session-3.scope), then run:

systemctl set-property session-3.scope MemoryLimit=15G

reboot, and again check whether available memory has increased.
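If you try the systemctl route, note that MemoryLimit= is the cgroup-v1 property name; on newer cgroup-v2 systems the equivalent is MemoryMax=. A sketch, with session-3.scope standing in for whatever scope systemd-cgls shows for your login session:

```shell
# Find your login session's scope unit in the cgroup tree.
systemd-cgls
# Cap its memory (cgroup-v1 property; use MemoryMax=15G on cgroup v2).
systemctl set-property session-3.scope MemoryLimit=15G
# Verify the property was applied.
systemctl show session-3.scope -p MemoryLimit
```

Note that this limits userspace memory in that session; it will not release memory already sitting in SUnreclaim, which belongs to the kernel, not to any user process.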

