
I have a pool holding 3.41 TB of data with deduplication enabled on its volumes. Judging by the output of

zpool status -D <pool_name>
...
dedup: DDT entries 73665285, size 696B on disk, 154B in core
...

I see that only about 10 GB of the DDT is held in RAM, and as I load more data the number of entries will grow while the in-core bytes per entry shrink. As far as I know, the DDT is stored as ARC metadata, yet arc_meta_used shows only about 8 GB. The limit on metadata in the ARC (zfs_arc_meta_limit_percent) is set to 75%, which has not even been reached yet, and the machine has 64 GB of RAM. Why is the entire deduplication table not being loaded into RAM?
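The ~10 GB in-core figure can be cross-checked from the `zpool status -D` line above; a rough sketch, assuming the reported per-entry sizes (154 B in core, 696 B on disk) apply to all 73,665,285 entries:

```shell
# Rough DDT footprint from the `zpool status -D` numbers above.
entries=73665285
core=154   # bytes per entry in core (RAM)
disk=696   # bytes per entry on disk
echo "in-core: $((entries * core)) bytes"   # 11344453890 bytes ~ 10.6 GiB
echo "on-disk: $((entries * disk)) bytes"   # 51271038360 bytes ~ 47.8 GiB
```

So entries times in-core size does line up with the roughly 10 GB observed in RAM.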

According to this output, my DDT size is 37.19 GB:

zdb -b pool
    bp count:       124780196
    ganged count:           0
    bp logical:    3997925134336      avg:  32039
    bp physical:   3988307198976      avg:  31962     compression:   1.00
    bp allocated:  6056878956544      avg:  48540     compression:   0.66
    bp deduped:    2188370706432    ref>1: 15910160   deduplication:   1.36
    SPA allocated: 3868508110848     used: 25.36%

    additional, non-pointer bps of type 0:         95
    Dittoed blocks on same vdev: 3706666
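To compare the observed arc_meta_used against the configured limit, the ARC metadata kstats and the tunable can be read directly; a sketch assuming Linux OpenZFS paths (kstat names such as arc_meta_used/arc_meta_limit appear on the release line in question; adjust for your platform):

```shell
# ARC metadata counters (usage, limit, high-water mark)
grep -E '^arc_meta' /proc/spl/kstat/zfs/arcstats

# Current value of the tunable mentioned above
cat /sys/module/zfs/parameters/zfs_arc_meta_limit_percent
```

Note that the limit only caps how much metadata the ARC may hold; DDT entries are still brought in on demand, not preloaded up to the limit.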

Why is the table not being paged into RAM, and how can I force-load it?
