
startup bash too slow with nvm loading in .bashrc config #776

Closed
mzvast opened this issue Aug 5, 2016 · 49 comments

Comments

@mzvast

mzvast commented Aug 5, 2016

Describe

Starting a new bash shell is too slow, 10 seconds or so, with the following two lines at the bottom of .bashrc:

export NVM_DIR="/home/mzvast/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm

In tmux, splitting a window and waiting for a new bash to initialize is especially painful.

Every new bash seems to take the same amount of time to initialize, no matter whether other bash instances are already running in the background.

Reproduce steps

  1. run curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.31.4/install.sh | bash
  2. reopen bash; it is immediately slow to initialize
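To put a number on the delay (a rough sketch, not part of the original report; run it a few times, since the first run may pay cold-cache costs), time an interactive shell that exits immediately:

```shell
# Time how long an interactive bash takes to source .bashrc and exit.
time bash -i -c exit
```

Comparing the result with and without the nvm lines in .bashrc isolates nvm's share of the startup cost.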

OS version, etc.

system: Windows 10 Pro x64, build 14393
hardware: Intel i5 2.5GHz, 6GB RAM, 256GB SSD

@adouzzy

adouzzy commented Aug 5, 2016

You should probably examine your .bashrc.
Initializing fasd, for example, can also cause slow startup.
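One hedged way to examine .bashrc for the slow line is to trace its execution with wall-clock timestamps (a sketch assuming GNU date for %N; the log path is an arbitrary choice):

```shell
# Trace .bashrc with a timestamp before each command; a big gap between
# consecutive timestamps points at the slow command.
PS4='+ $(date "+%s.%N") ' bash -x -i -c exit 2>/tmp/bashrc_trace.log
tail -n 20 /tmp/bashrc_trace.log
```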

@iz0eyj

iz0eyj commented Aug 5, 2016

No lag on my Core i7 with a Samsung 850 Pro SSD; startup time is near 0 seconds.

The strange thing is that the sudo command takes a few seconds.

@mzvast
Author

mzvast commented Aug 5, 2016

@adouzzy You are right. After commenting out the following lines, which load nvm, it starts fine again.

#export NVM_DIR="/home/mzvast/.nvm"
#[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm

I think this may be a performance issue with bash on Windows, because the same bash config starts up very fast on my VPS.

@mzvast mzvast changed the title startup bash too slow Aug 5, 2016
@benhillis
Member

I've also noticed nvm slowing down bash launch. It would be very helpful if we could narrow down what is causing the delay via an strace.

@mzvast mzvast changed the title startup bash too slow with some bashrc config Aug 6, 2016
@mzvast mzvast changed the title startup bash too slow with nvm koads in .bashrc config Aug 6, 2016
@mzvast mzvast changed the title startup bash too slow with nvm loads in .bashrc config Aug 6, 2016
@arcanis

arcanis commented Aug 6, 2016

This can be partially solved by using --no-use, but the real culprit definitely is nvm use and, to a lesser extent, nvm version.
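For reference, --no-use is passed when sourcing nvm.sh, so the .bashrc lines from the report become (a sketch; node then stays off PATH until you run nvm use yourself):

```shell
export NVM_DIR="$HOME/.nvm"
# --no-use skips the implicit "nvm use" on load, which is the slow part.
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" --no-use
```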

@rcdosado

rcdosado commented Nov 2, 2016

After commenting out the command below in .bashrc, I removed a 4+ second delay in loading the terminal. Any idea how to make this faster? Thanks. Using Ubuntu 16:

export NVM_DIR="/home/username/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh" # This loads nvm

@davatron5000

davatron5000 commented Dec 9, 2016

Confirming. It's adding ~30s to shell startup on my machine. (Core i7 6500k w/ M.2 SSD)

me@computer:~$ time (source "$NVM_DIR/nvm.sh")

real    0m29.657s
user    0m1.484s
sys     0m32.141s

Edit: Workaround. Use n instead. https://github.com/tj/n

me@computer:~$ time (source ~/.bashrc)

real    0m1.806s
user    0m0.156s
sys     0m1.594s
@reybango

Confirming that this is still an issue in the latest Selfhost.

@sunilmut
Member

Thanks for the report @reybango.

If someone can collect an strace with relative timestamps, it will be easy to see where the delay is coming from.

strace -t -o 'trace_file' -ff $NVM_DIR/nvm.sh

Then share the trace files.

@arcanis

arcanis commented Mar 23, 2017

Here are my trace files:

trace.tar.gz

@bmayen

bmayen commented Mar 23, 2017

I get "strace: exec: Exec format error" when running that :/

@asclines

I would like to say that I (like @bmayen ) get strace: exec: Exec format error when running strace

@reybango

@sunilmut could you give more precise instructions on running the strace?

@arcanis

arcanis commented Mar 23, 2017

I believe the right command is strace -t -o 'trace_file' -ff bash $NVM_DIR/nvm.sh, otherwise you're asking strace to exec a shell script, which won't work.

@asclines

Upon running the strace command the way @arcanis suggested, several trace_files (215) were created. Is this an expected result? If so, what exactly is @sunilmut expecting from this?

@arcanis

arcanis commented Mar 23, 2017

Being a bash script, nvm executes various commands in various processes. strace logs each of these processes in a separate file. Since I'm not sure of the best way to sort them without losing info, I just tarball'd them :|

@sunilmut
Member

@arcanis - Thanks for sharing the traces. The idea is to see which command is taking so long to execute. Since the trace files have timestamps, that should make it easier to gather that information. I haven't gone through the traces yet, but they're available here for anyone to parse.
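One hedged way to pull the slow spots out of a trace, assuming strace was also given -T so each line ends with a <seconds> duration (the sample lines below are inlined for illustration; real traces are much longer):

```shell
# Write a few sample strace lines (the duration is the trailing <...>).
cat > /tmp/trace_sample.txt <<'EOF'
read(3, "data", 128) = 4 <0.551428>
close(3) = 0 <0.000029>
clone(child_stack=NULL) = 5729 <0.000181>
EOF

# Prefix each line with its duration, then sort slowest-first.
sed -n 's/.*<\([0-9.]*\)>$/\1 &/p' /tmp/trace_sample.txt | sort -rn | head -10
```

Here the 0.551428 s read sorts to the top, which is the kind of outlier worth investigating.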

@stehufntdev
Collaborator

We tracked down the slow nvm start-up time using xperf; it's a known perf bottleneck in clone/fork when copying address-space information. We were already planning the fix, so it should be out in Insider builds soon.

@mrmckeb

mrmckeb commented Apr 26, 2017

@stehufntdev Will releases coincide with Windows releases? Or is there a chance of a mid-cycle update? As this is still technically a pre-release, it would be cool to see features added to the WSL more regularly - installing the Insider builds of Windows can obviously have other effects...

@stehufntdev
Collaborator

@mrmckeb thanks for the feedback. Pushing out changes to the Creators Update is currently gated through servicing criteria, which has a pretty high bar and requires data on impact. We would like to get to the point where mid-cycle updates can go out, but unfortunately we aren't there yet. This change will likely fall into the other category, where it is flighted to Insider builds and becomes available in the next official release of Windows.

@stehufntdev
Collaborator

A fix for this should be out for insider builds soon that reduces nvm start-up by ~60%. On my test vm it took the launch from 12 seconds to 5 seconds. @benhillis reminded me that percentages are hard, and it sounds much better if we say it's ~2.5x!!! faster :).

We understand where the remaining time is being spent and are tracking additional work to bring this down closer to native Linux speeds.

@reybango

@stehufntdev note that I had seen better perf in 16179 than in 16184. Went from maybe 5-10 seconds max to nearly 2 minutes now. I commented out the loading of nvm in .zshrc with no improvement. Then I commented out loading the zsh shell (oh-my-zsh) altogether, and the normal bash shell loads near-instantaneously.

Now the one behavior I'm seeing while loading the zsh shell is that during the long length of time, if I break out using CTRL-C, the zsh shell is loaded. Maybe this is some type of hang or race condition?

@stehufntdev
Collaborator

@reybango, thanks for the update. Can you start a new thread on the zsh issue so we can take a look?

@djensen47

I'm using 16241 and it's still unacceptably slow. 😕

@EdwinHu233

The nvm path slowed down my bash (and zsh) on my Ubuntu 16.04, too. After removing it from .bashrc (or .zshrc) it's much better.

@JimmyBjorklund

Problem solved: don't use nvm =)
Thanks

@alberduris

This is still an issue.

@peey

peey commented Feb 27, 2018

Related Thread: nvm-sh/nvm#782

If you installed node without nvm (the system installation), that is what you usually use, and you use nvm only for switching to other versions of node, then this comment on that thread is a nice workaround.

@jcklpe

jcklpe commented Nov 24, 2018

Still having this problem. Wicked slow.

@jcklpe

jcklpe commented Nov 24, 2018

Thanks for the snippet @well1791. It didn't speed stuff up much, but it did stop bash from constantly telling me that nvm wasn't installed when it was.

@KenG98

KenG98 commented Dec 11, 2018

I'm also experiencing this. Quick fix for people who don't use nvm too often: in your .bashrc, put the nvm-related lines into a function and call the function when you need to use nvm. This way it doesn't slow bash down at startup; you only wait right before you use nvm for the first time.

Gist: https://gist.github.com/KenG98/2d084a9859637cdd1614ba27485e2ef9

@ravron

ravron commented Jun 21, 2019

I've wrapped nvm setup in a function, so it's lazy-loaded but seamless to use. Give it a shot!

https://github.com/ravron/dotfiles/blob/2093bb4b257db221f31fa900cfc8cd394476a7cd/.bashrc#L233-L243

@djensen47

I switched to asdf for loading different versions of node (and others) and it solved my problem. Start up times are super fast now.

@therealkenc
Collaborator

therealkenc commented Jun 26, 2019

I've wrapped nvm setup in a function

It's an alright approach, don't get me wrong. But the idea is premised on running nvm from your shell command line, thus triggering the bash shell function lazy-load.

Unfortunately (?) bash shell functions aren't inherited by child processes. But the whole node ecosystem assumes node is set up just-so before being invoked.

So use this if it works for your particular needs. No dis on the perf improvement if you figure you'll only ever call nvm from the bash command line first. But users should understand what that "seamless" solution is doing before pasting it into .bashrc. [read: if this worked, you'd think the makers of nvm would have done it, eh.]

@z1yuan

z1yuan commented Aug 22, 2019

I've wrapped nvm setup in a function, so it's lazy-loaded but seamless to use. Give it a shot!

https://github.com/ravron/dotfiles/blob/2093bb4b257db221f31fa900cfc8cd394476a7cd/.bashrc#L233-L243

This solution works for me and does speed up my WSL, thank you! I picked the function from your bashrc; anyone else can just paste it into their own:

nvm() { 
  export NVM_DIR="$HOME/.nvm"
  [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
  nvm "$@"
}
@SliceThePi

@z1yuan I did something similar, but there's no reason to source the nvm script separately each time. Here's an alternative; it's just as effective at speeding up your shell startup, and you still get bash completion even before the command references the real nvm! Note: I'm not sure if you actually need the unset nvm, but I didn't care enough to check, as it isn't hurting anything.

export NVM_DIR="$HOME/.nvm"
nvm() {
  unset nvm
  [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
  nvm "$@"
}
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion
@z1yuan

z1yuan commented Sep 3, 2019

@z1yuan I did something similar, but there's no reason to source the nvm script separately each time. Here's an alternative script; it's just as effective at speeding up your computer, and you still get bash-completion even before the command references the real nvm! Note: I'm not sure if you actually need to have unset nvm, but I didn't really care enough to check, as it's not hurting anything.

export NVM_DIR="$HOME/.nvm"
nvm() {
  unset nvm
  [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
  nvm "$@"
}
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion

@SliceThePi I have tested your script by adding it to my .bashrc. It speeds things up for sure, but it seems like I have to execute nvm to activate it again. Is that the same result as yours? Do you mean that with your script we can speed up and activate nvm/npm/node by default?

z1yuan@N-20HEPF1E0N75:~$ node --version
Command 'node' not found, but can be installed with:
sudo apt install nodejs
z1yuan@N-20HEPF1E0N75:~$ npm
: not foundram Files/nodejs/npm: 3: /mnt/c/Program Files/nodejs/npm:
: not foundram Files/nodejs/npm: 5: /mnt/c/Program Files/nodejs/npm:
/mnt/c/Program Files/nodejs/npm: 6: /mnt/c/Program Files/nodejs/npm: Syntax error: word unexpected (expecting "in")
z1yuan@N-20HEPF1E0N75:~$

@SliceThePi

@z1yuan Yeah, I noticed that. I ended up adding the same wrapper for node, and a separate load_nvm function:

export NVM_DIR="$HOME/.nvm"
load_nvm() {
  unset nvm
  unset load_nvm
  unset node
  [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
}
nvm() {
  load_nvm
  nvm "$@"
}
node() {
  load_nvm
  node "$@"
}
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm

It's still not perfect (doesn't automatically get global node cli stuff into your PATH if I recall correctly) but it gets the job done a little better.

@jacobq

jacobq commented Sep 17, 2019

I spent some time trying to work around this problem on an RPi today and thought I ought to share some data in case it is helpful: strace_nvm_on_rpi4.zip

  • Since I was running as root, I was able to slightly improve performance by increasing priority:
    time ionice -c 1 -n 0 nice -n -20 sh $NVM_DIR/nvm.sh # outputs ~0.8s instead of ~1.2s
  • Running from tmpfs (i.e. copying $HOME/.nvm to /tmp/.nvm_home at start-up and setting $NVM_HOME to match) didn't seem to improve anything
  • strace -T -ttt -o strace.out sh $NVM_DIR/nvm.sh
  • perl strace_analyzer_ng_0.09-jrq.pl strace.out > analyzer_output.txt
  • Notable results:
    ----------------
    -- Time Stats --
    ----------------
    Elapsed Time for run: 0.971589 (secs) 
    Total IO Time: 0.899753 (secs) 
    Total IO Time Counter: 147 
       Percentage of Total Time = 92.606330% 
    ...
    Time for slowest read syscall (secs) = 0.551428 
       Line location in file: 288 
    ...
    
`head -n 300 strace.out |tail -n 40` # to see context around line 288
1568741598.613372 pipe([3, 4])          = 0 <0.000031>
1568741598.613458 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb6f97d18) = 5729 <0.000181>
1568741598.613723 close(4)              = 0 <0.000025>
1568741598.613811 read(3, "", 128)      = 0 <0.004820>
1568741598.618725 close(3)              = 0 <0.000029>
1568741598.618811 wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 5729 <0.000031>
1568741598.618902 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5729, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
1568741598.618946 sigreturn({mask=[]})  = 5729 <0.000019>
1568741598.619039 wait4(-1, 0xbefd2c5c, WNOHANG, NULL) = -1 ECHILD (No child processes) <0.000019>
1568741598.619154 openat(AT_FDCWD, "/dev/null", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 3 <0.000038>
1568741598.619255 fcntl64(1, F_DUPFD, 10) = 12 <0.000019>
1568741598.619323 close(1)              = 0 <0.000018>
1568741598.619386 fcntl64(12, F_SETFD, FD_CLOEXEC) = 0 <0.000018>
1568741598.619451 dup2(3, 1)            = 1 <0.000018>
1568741598.619522 close(3)              = 0 <0.000018>
1568741598.619585 fcntl64(2, F_DUPFD, 10) = 13 <0.000018>
1568741598.619649 close(2)              = 0 <0.000018>
1568741598.619711 fcntl64(13, F_SETFD, FD_CLOEXEC) = 0 <0.000018>
1568741598.619775 dup2(1, 2)            = 2 <0.000018>
1568741598.619846 stat64("/root/.nvm/versions/node/v12.10.0/bin/npm", {st_mode=S_IFREG|0755, st_size=4615, ...}) = 0 <0.000032>
1568741598.619949 write(1, "npm is /root/.nvm/versions/node/"..., 49) = 49 <0.000019>
1568741598.620018 dup2(12, 1)           = 1 <0.000018>
1568741598.620083 close(12)             = 0 <0.000017>
1568741598.620145 dup2(13, 2)           = 2 <0.000021>
1568741598.620213 close(13)             = 0 <0.000018>
1568741598.620285 pipe([3, 4])          = 0 <0.000028>
1568741598.620364 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb6f97d18) = 5735 <0.000169>
1568741598.620612 close(4)              = 0 <0.000019>
1568741598.620701 read(3, "/root/.nvm/versions/node/v12.10."..., 128) = 34 <0.551428>
1568741599.172269 read(3, "", 128)      = 0 <0.009788>
1568741599.182188 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5735, si_uid=0, si_status=0, si_utime=54, si_stime=4} ---
1568741599.182249 sigreturn({mask=[]})  = 0 <0.000020>
1568741599.182331 close(3)              = 0 <0.000032>
1568741599.182414 wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 5735 <0.000058>
1568741599.182556 clone(child_stack=NULL, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0xb6f97d18) = 5746 <0.000207>
1568741599.182877 wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], 0, NULL) = 5746 <0.006748>
1568741599.189800 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=5746, si_uid=0, si_status=0, si_utime=0, si_stime=1} ---
1568741599.189847 sigreturn({mask=[]})  = 5746 <0.000019>
1568741599.189929 wait4(-1, 0xbefd2f0c, WNOHANG, NULL) = -1 ECHILD (No child processes) <0.000019>
1568741599.190052 dup2(11, 1)           = 1 <0.000022>

Also, though you probably all figured this out already:

Upon running the strace command the way @arcanis suggested, several trace_files (215) were created. Is this an expected result? If so, what exactly is @sunilmut expecting from this?

The -ff option causes a separate trace file to be written for each PID, so yes (man strace.1).
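Building on that, a hedged way to get one chronological view across the per-PID files (assuming -ttt absolute timestamps as in the trace above; the sample file names and contents below are made up for illustration):

```shell
# Two tiny per-PID sample files with -ttt style leading timestamps.
printf '%s\n' \
  '1568741598.613372 pipe([3, 4]) = 0' \
  '1568741599.172269 read(3, "", 128) = 0' > /tmp/strace.out.100
printf '%s\n' \
  '1568741598.620364 clone(child_stack=NULL) = 5735' > /tmp/strace.out.101

# Each per-PID file is already time-ordered, so a numeric merge interleaves
# them into a single timeline without re-sorting the whole set.
sort -m -n /tmp/strace.out.* > /tmp/strace.merged
cat /tmp/strace.merged
```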


TL;DR Here's what I ultimately came up with: set path, etc. to a default node version then lazy load nvm with an alias as others have done (only tested on bash but seems to do just what I wanted):

# (updated .bashrc)

# Utility for removing an entry from $PATH -- copied from SO post:
# https://stackoverflow.com/questions/11650840/remove-redundant-paths-from-path-variable#answer-47159781
pathremove() {
    local IFS=':'
    local NEWPATH
    local DIR
    local PATHVARIABLE=${2:-PATH}
    for DIR in ${!PATHVARIABLE} ; do
        if [ "$DIR" != "$1" ] ; then
            NEWPATH=${NEWPATH:+$NEWPATH:}$DIR
        fi
    done
    export $PATHVARIABLE="$NEWPATH"
}

export NVM_DIR="$HOME/.nvm"
DEFAULT_NODE_VERSION="v12.10.0"
load_nvm() {
    # TODO: ionice -c 1 -n 0 nice -n -20 cmd... ?
    [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"  # This loads nvm
    [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"  # This loads nvm bash_completion
}
#load_nvm

# This gets node & friends into the path but doesn't initialize nvm proper until needed
lazy_load_nvm() {
    export NVM_BIN="$NVM_DIR/versions/node/$DEFAULT_NODE_VERSION/bin"
    export PATH="$NVM_BIN:$PATH"
    export NVM_CD_FLAGS=""
    alias nvm="echo 'Please wait while nvm loads' && unset NVM_CD_FLAGS && pathremove $NVM_BIN && unset NVM_BIN && unalias nvm && load_nvm && nvm $@"
}
lazy_load_nvm

So now, just after logging in, we can do this (assuming nvm install 10 was run previously):

$ node -v; echo $PATH; nvm use 10; node -v; echo $PATH
v12.10.0
/root/.nvm/versions/node/v12.10.0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Please wait while nvm loads
# ... time passes here ...
Now using node v10.16.3 (npm v6.9.0)
v10.16.3
/root/.nvm/versions/node/v10.16.3/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
@andrewrothman

Switched to "n" and I'm experiencing a noticeable speed increase when opening new terminals.

@xploSEoF

Over the last 4 years of using my WSL instance, I've noticed this slow-down and attributed it to two things, one of which is the nvm environment.

The other was my key-store, which was taking longer than 30 seconds to load. Debugging that highlighted a more generic problem: WSL does not have /tmp configured as a tmpfs mount, which is why @jacobq saw no improvement when trying to use /tmp for the nvm fix. See #6999 and https://superuser.com/questions/1170939/simulate-reboot-to-clear-tmp-on-the-windows-linux-subsystem/1656653#1656653 for further information.

This highlights a bigger issue: none of the normal tmpfs mounts have been configured, so slowdowns and storage leaks are likely, and this could be why nvm has slowed over the years. Granted, /tmp isn't a tmpfs mount by default on most *nix environments, but all hardening guides insist it must be, and there are usually a couple of other areas that are tmpfs.

The lazy-load approach highlighted by @jacobq could be the best resolution for nvm, but it's only a patch hiding the real cause, whether that be a missing tmpfs mount or some other overlooked but fundamental part of *nix environments.
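If the missing tmpfs mount is indeed the cause, a hedged fix is to add one via /etc/fstab (a sketch: the size value is an arbitrary example, and it assumes WSL's fstab processing is enabled, which it is by default via the [automount] mountFsTab setting in /etc/wsl.conf):

```shell
# Append a tmpfs mount for /tmp to /etc/fstab (run once, as root, in WSL).
# size=1g is an arbitrary example value; tune it to your RAM.
grep -q '^tmpfs /tmp ' /etc/fstab 2>/dev/null || \
  echo 'tmpfs /tmp tmpfs defaults,noatime,size=1g 0 0' | sudo tee -a /etc/fstab
```

After a wsl --shutdown and restart, mount | grep /tmp should show the tmpfs.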

@quroom

quroom commented Apr 5, 2024

Remove ~/.nvm and everything installed in it.
Use nvm installed with the Windows installer,
and then control the node version with nvm in PowerShell.
It will fix the Git Bash loading performance issue.
Of course, you should also remove the nvm script lines from .bashrc.

@xploSEoF

xploSEoF commented Apr 5, 2024

Remove ~/.nvm and everything installed in it. Use nvm installed with the Windows installer, and then control the node version with nvm in PowerShell. It will fix the Git Bash loading performance issue. Of course, you should also remove the nvm script lines from .bashrc.

Nice suggestion, but having tried this, I've found it has quite the knock-on consequences:

  • No longer able to use anything else running in WSL directly
  • Runs as Windows not as Linux, so things like paths, scripts, and even UI aspects are all in the Windows environment instead of WSL environment
  • NVM for Windows only allows one version of Node to be active at once, unlike standard NVM which allows each terminal to run its own version
@quroom

quroom commented Apr 5, 2024

@xploSEoF Sorry about that, I only tested it in my Git Bash shell on Windows, because I use it integrated in VS Code. I faced somewhat the same performance issue in the VS Code Git Bash terminal. I didn't actually try it with WSL, sorry about that.
But I've seen n suggested as an alternative to nvm. How about using that instead?
