  • uis@lemm.ee
    4 months ago

    > First, this is an argument derived from first-generation microkernels, and in particular from MINIX, which, as a teaching-aid OS, never tried to play the benchmark game.

    Indeed, first-generation microkernels were so bad that Jochen Liedtke, in rage, created L3 “to show how it’s done”. While it was faster than existing microkernels, it was still slow.

    > One paper notes that, once the working code exceeds the L2 cache size, there is marginal advantage to the monolithic structure.

    1. The paper was written in the pre-Meltdown era.
    2. The paper is about hybrid kernels, and a gutted Mach (XNU) is used as the example.
    3. Nowadays (after Meltdown) all cache levels are usually invalidated on a context switch. Processors try to add mechanisms to avoid this, but those create new vulnerabilities. (See the sketch right after this list.)
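    To make that cost concrete, here is a minimal sketch (my own illustration, not a methodology from either paper) that times a bare syscall round trip on Linux. Comparing a run on a kernel booted with mitigations=off against the default settings shows how much the Meltdown-class mitigations add to every kernel entry and exit:

    ```c
    /* Illustrative microbenchmark: average cost of a minimal kernel
     * entry/exit. The result shifts noticeably depending on whether
     * KPTI and related mitigations are enabled. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void) {
        const long iters = 1000000;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++)
            syscall(SYS_getpid);        /* forces a real kernel round trip */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("avg syscall round trip: %.1f ns\n", ns / iters);
        return 0;
    }
    ```

    A microkernel pays this crossing cost far more often, since requests that would be plain function calls inside a monolithic kernel become IPC round trips.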

    > A second paper running benchmarks on L4Linux vs Linux concluded that the microkernel penalty was only about 5%-10% slower for applications than the Linux monolithic kernel.

    1. Waaaaay before the Meltdown era.

    I’ll mark quotes from the paper with double quotes.

    > “a Linux version that executes on top of a first-generation Mach-derived µ-kernel”

    1. So, a hybrid kernel. Not as bad as a pure microkernel.

    > “The corresponding penalty is 5 times higher for a co-located in-kernel version of MkLinux, and 7 times higher for a user-level version of MkLinux.”

    Wait, what? Co-located in-kernel? So, a loadable module?
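    For reference, this is roughly what I mean by “loadable module”: a bare-bones Linux module skeleton (my own illustration, names made up, not code from the paper). Once loaded, it runs in the kernel’s address space at full kernel privilege, which is what “co-located in-kernel” suggests to me:

    ```c
    /* Minimal Linux loadable-module skeleton. Once insmod'ed, this code
     * shares the kernel's address space and privilege level, so there is
     * no protection boundary between it and the rest of the kernel. */
    #include <linux/init.h>
    #include <linux/module.h>
    #include <linux/printk.h>

    MODULE_LICENSE("GPL");
    MODULE_DESCRIPTION("Illustration of in-kernel (co-located) code");

    static int __init colocated_demo_init(void)
    {
        pr_info("colocated_demo: loaded, running with full kernel privileges\n");
        return 0;
    }

    static void __exit colocated_demo_exit(void)
    {
        pr_info("colocated_demo: unloaded\n");
    }

    module_init(colocated_demo_init);
    module_exit(colocated_demo_exit);
    ```

    That would also explain why the co-located version is faster than the user-level one: fewer address-space crossings, at the cost of the isolation a microkernel is supposed to provide.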

    > “In particular, we show (1) how performance can be improved by implementing some Unix services and variants of them directly above the L4 µ-kernel”

    1. No surprise here. Hybrids are faster than microkernels. Kinda proves my point that moving closer to monolithic improves performance.

    Right now I’ve stopped at the end of the second page of the paper. Maybe I’ll continue later.

    > this blog entry

    Will read.