
I have an embedded device which runs Linux.

I'm trying to speed up the boot sequence. Would rewriting large parts of it in C speed things up a lot?

For example, I have a lot of scripts that test for this or that, then mount this or that. Here is /etc/rcS.d/S03sysfs:

#!/bin/sh

if [ -e /proc ] && ! [ -e /proc/mounts ]; then
  mount -t proc proc /proc
fi

if [ -e /sys ] && grep -q sysfs /proc/filesystems; then
  mount -t sysfs sysfs /sys
fi

exit 0

My guess is that if it were in C, it would be much faster, right?

My questions:

Why isn't it already in C?
Would there be a speed gain from writing this in C?

3 Answers


It would be somewhat faster in C, but the language choice isn't what affects performance the most. It is usually more effective to perform various tasks in parallel, rather than waiting for each to complete sequentially as simpler init systems do. For example, sshd and httpd could be started at the same time, since neither requires the other to be already running.


There is no single "Linux boot sequence". Each distribution has its own; there isn't even a single thing they all have in common. It can be in C, Perl, Haskell, anything; the only requirement is that an executable named /init be present in the initramfs, or /sbin/init in the root filesystem.

The /etc/rc?.d scheme is simply an extension of the Unix boot process of 20 years ago, maybe even 30 years. The earliest Unix systems were rebooted fairly rarely, so they would have a simple script, /etc/rc or similar, that would be launched by init and start various daemons sequentially.

Even today, SysV init is used to run all such scripts, although the exact method varies. Originally, a system would run all scripts in /etc/rc?.d in alphanumeric order; currently Debian uses Makefile-style dependencies to order them.

Some distributions – Ubuntu, Chrome OS, Fedora up to v14 – have switched to Upstart, which is written in C and is "event-based", allowing daemons to be started in parallel. Another init system, systemd, appears to be rising quickly in popularity; it is used by default in Fedora and openSUSE, and it is also written in C. (Both systems still read textual configuration files to decide which daemons are to be started.)

Those distributions which still stick with SysVinit usually do it for "simplicity"; the most commonly heard [citation needed] arguments are that shell scripts are easier to maintain than equivalent C code (although said shell scripts consist of 90% copypasta), along with a mortal fear of introducing additional library dependencies [subjective]. You can see for yourself in various discussion threads on the Debian mailing list from May 2012.

(Disclaimer: I'm a systemd user myself.)

  • This is also an interesting read, as well as: Why shouldn't we use the word 'here' in a text link
    – Marco
    Commented May 4, 2012 at 15:57
  • I have only a single core on my device, so parallel stuff will not boost speed very much, would it? Unless my startup blocks a lot... does it?
    – user1190
    Commented May 4, 2012 at 19:09
  • @user1190: Waiting on IO is a common block, so even on a single core you should see gains.
    – Daenyth
    Commented May 4, 2012 at 19:25

Why isn't it already in C?

For cross-platform compatibility, and because it allows describing the boot process with sh scripts. Maintaining boot scripts in C would be a PITA.

Would there be a speed gain from writing this in C?

Not much. While some parts would be faster, the overall speed gain would be marginal. Most of the boot process is strictly sequential, especially the steps in runlevels 1 and 2. From runlevel 3 upward, the boot process could be parallelized using something like Runit, which would get you bigger speed gains.

  • The boot scripts do not have to be 100% C code; they could still read textual configuration files (such as fstab for filesystems). Commented May 4, 2012 at 15:45
  • Why would GCC be a dependency for a program that can be distributed as a binary package? Remember that sysvinit is also written in C. (Also, as a maintainer of several Arch packages, I've found it much easier to ensure the consistency of 5-line unit files which work everywhere than of 20-line shell scripts which vary from distro to distro.) Commented May 4, 2012 at 16:00
  • The init system itself is already written in C and has the compiler as a Build-Dep. On the other hand, the initscripts for each service do not have to be scripts at all; they can be simple textual configuration files, as in my earlier example. There is no reason for them to be compiled. Commented May 4, 2012 at 16:12
  • @grawity: We could call the text files "scripts" and the C code "the shell" and then... oh wait, that's how it's set up now. Seriously, what sort of speed-up do you think you might get? Bear in mind that the shell will already be read into RAM, that there are other reasons for keeping it as-is mentioned on this page, and that you'd be trading a very small performance gain for a lot of admin pain...
    – mpez0
    Commented May 4, 2012 at 19:04
  • One difference is that sh scripts rely on a large number of external programs; even if they are cached in memory, fork()ing them still has a cost larger than that of a single mount() syscall (as in the OP's example). As for admin pain, the only change I've seen so far is a decrease. (I feel that I'm getting involved in a third flamewar this week. I should stop.) Commented May 4, 2012 at 20:53

Most of what the boot sequence spends its time doing is waiting for things, one after another: filesystems, networks, etc.

If you want a really fast boot sequence, boot with init=/bin/sh: an instant command prompt! If it's an embedded system, you could have it boot straight into your application.
