dkms uses maxed-out compression values, taking minutes to build modules rather than milliseconds
Upstream recently merged configurable compression; it will ship in an upcoming release.
However, the contributor who submitted the PR kept the current defaults to avoid breakage.
The defaults in question are completely maxed out for every compression algorithm.
That leads to build times measured in minutes rather than milliseconds with zstd, even on an older Threadripper, per the benchmark here - https://github.com/dell/dkms/issues/455#issue-2627299611
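The trade-off behind that benchmark can be sketched locally. A minimal illustration, using `lzma` (xz) from the Python standard library as a stand-in for the compressors dkms supports (the data and presets here are hypothetical, not dkms's actual invocation):

```python
# Illustrative only: higher compression presets cost much more time
# for a modest size gain. Stand-in for dkms's module compression step;
# the synthetic "module" data below is an assumption, not a real .ko file.
import lzma
import os
import time

# ~4 MiB: 1 MiB incompressible noise plus 3 MiB of zeros,
# roughly mimicking a binary with compressible sections.
data = os.urandom(1 << 20) + b"\x00" * (3 << 20)

for preset in (1, 6, 9):  # low, default, max
    start = time.perf_counter()
    compressed = lzma.compress(data, preset=preset)
    elapsed = time.perf_counter() - start
    print(f"xz preset {preset}: {len(compressed) / 1e6:.2f} MB "
          f"in {elapsed:.2f} s")
```

On typical hardware the max preset takes several times longer than the low one while the output size barely changes, which is the same pattern the linked dkms benchmark shows for zstd.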
IMO there should be a push upstream to reduce the newly added defaults to more sensible values for a new major version, while adopting a lower compression default downstream in Arch Linux in the meantime.
Why
It greatly reduces build times at the cost of mildly increased module sizes on disk.
It would also, in theory, allow simplifying package management: for example, `nvidia` and `nvidia-lts` could be dropped, retaining only `nvidia-dkms`, since nearly the entire reason for shipping prebuilt modules there is the very long build times during updates. Note the same does not apply to `nvidia-open` in that example, where the long build times do not come solely from dkms.
There are two downsides that I see:
- It may break specific already-broken setups that combine an undersized `/boot` partition with building modules into the initramfs. The lower compression values mildly increase file sizes, which could exhaust the filesystem where it previously just barely fit, despite being undersized.
- If packages are dropped as a result, users now have to have the kernel headers installed. Not a problem, but a consideration nonetheless.