29

I've been using Windows and Mac OS for the past 5 years, and now I'm considering using Linux on a daily basis. I've installed Ubuntu on a virtual machine and am trying to understand how I can use Linux for my daily job (as a JS programmer / web designer).

Sorry for the novice question, but it occurs to me that sometimes when I install a program through make config & make install, it changes my system in ways that are not easily reversible. In Windows, when you install a program, you can uninstall it, and if it plays by the book there will hopefully be no traces of it left in the file system, registry, etc. In Mac OS you simply delete an app like a file.

But in Linux there is apt-get, and then there is make. I don't quite understand how I can keep my Linux installation clean and tidy. It feels like any new app installation may break my system. But Linux has a reputation for being very robust, so there must be something I don't understand about how app installation and uninstallation affect the system. Can anyone shed some light on this?


Update: when installing an app, its files can end up almost anywhere (package managers handle part of the issue), but there is a neat workaround: use Docker to install apps and keep them in their own sandbox, especially if you're not going to use them often. It is also possible to run GUI apps like Firefox entirely inside a Docker "sandbox".
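For example, a GUI app can be launched from a container by sharing the host's X11 socket. The image name below is a placeholder (several community images package Firefox this way), and note that sharing the X socket does weaken the sandbox somewhat:

```
$ xhost +local:docker                      # allow containers to talk to the X server
$ docker run --rm \
      -e DISPLAY=$DISPLAY \
      -v /tmp/.X11-unix:/tmp/.X11-unix \
      some-firefox-image
$ docker rmi some-firefox-image            # "uninstalling" is just deleting the image
```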

7
  • 11
As a simple user you're supposed to use apt-get rather than make to install software. make install is used when you need to build the latest (possibly unstable) version of a piece of software from source, one that is not yet available as a package. Commented May 15, 2015 at 0:32
  • @DmitryGrigoryev Using apt is simpler and provides a better tui than using apt-get.
    – Bakuriu
    Commented May 15, 2015 at 6:35
  • 3
    When I used OS X, I often found deleting the *.app file insufficient, as application installations often littered other places (e.g. the Library directory, from memory). Also, if you manually build from source in Ubuntu with make install, use checkinstall instead to allow easy removal.
    – Sparhawk
    Commented May 15, 2015 at 7:29
  • 1
Don't use the ./configure ; make ; make install way. All you need is to learn the fabulous fpm tool. Commented May 15, 2015 at 8:59
  • what is the fpm tool?
    – AlexStack
    Commented May 15, 2015 at 9:00

9 Answers

28

A new install will seldom break your system (unless you do weird stuff like mixing source and binary installs).

If you use precompiled binaries in Ubuntu, you can remove them without worrying about breaking your system: a binary package lists what it requires to run, and your package manager will show you which programs depend on it so you can review them.
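For instance, on a Debian/Ubuntu system you can inspect that bookkeeping with a few query-only commands (nothing is modified; coreutils is just a convenient example package, and the snippet is guarded so it is a no-op elsewhere):

```shell
# Query-only dpkg/apt commands -- nothing on the system is modified.
# Guarded so the snippet does nothing on non-Debian systems.
if command -v dpkg >/dev/null 2>&1 && command -v apt-cache >/dev/null 2>&1; then
    dpkg -s coreutils | head -n 5                          # metadata for an installed package
    dpkg -L coreutils | head -n 5                          # files that package installed
    apt-cache --installed rdepends coreutils | head -n 5   # installed packages that depend on it
fi
```

On an RPM-based distro the rough equivalents are rpm -qi, rpm -ql and repoquery --whatrequires.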

When you use source, you need to be more careful so you don't remove something critical (like glib). There are no warnings or safety checks when you uninstall something built from source, which means you can completely break your machine.

If you want to uninstall using apt-get then you'll use apt-get remove package as previously stated. Any programs that rely on that package will be uninstalled as well and you'll have a chance to review them.

If you want to uninstall a source build, the process is generally make uninstall. Again, there is no warning (as I said above).

make config will not alter your system (nor will the more common ./configure), but make install will.

As a beginner, I recommend sticking to apt-get, or whatever your distro uses for binary packages. It keeps things nice and organized, and it won't break your system unless you really try to.

Hopefully, that clears everything up.

1
  • 3
    For traceless uninstall you will of course use the --purge option with apt-get Commented May 15, 2015 at 21:30
17

In theory, make uninstall should remove whatever make install added, so your system does not accumulate cruft. The problem, of course, is that not all makefiles are created equal.

Some omit the uninstall rule, leaving you to figure out what the install rule did. Worse, if the install rule overwrote a shared library, a naive uninstall routine may break the dependencies of some other program.

The best solution for source installs is to use a different prefix from the one used by the system's package manager. Apt installs files under /usr/, so use the /usr/local/ hierarchy for your source installs. That makes it much easier to keep track of which files belong to which package, and uninstalls won't break the system.

./configure --prefix=/usr/local works for most configure scripts. If not, you can edit the Makefile manually, or just copy the files into place yourself.
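A typical session might look like this; using a prefix inside your home directory even avoids the need for root (the particular package doesn't matter):

```
$ ./configure --prefix="$HOME/.local"
$ make
$ make install                           # no sudo needed for a prefix you own
$ export PATH="$HOME/.local/bin:$PATH"   # so the shell finds the new binaries
```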

Apt and other package managers keep track of which files they've installed and of reverse dependencies, so their uninstall functions are safe to use.

15

I would recommend you use apt-get install to install any package in Linux, and apt-get remove (package name) or apt-get purge (package name) to uninstall it; purge removes not only the package itself but also its configuration files.

Now, to keep your system cleaner, I'd recommend apt-get clean (this post is interesting on the topic: https://askubuntu.com/questions/144222/how-do-apt-get-clean-and-apt-get-clean-all-differ#144224), which removes the package files that were downloaded during installation but are no longer needed.

Another command that is useful if you want to remove dependencies that were installed alongside a package but weren't removed when you uninstalled it is apt-get autoremove.
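Put together, a typical install-and-cleanup session looks like this (firefox is just an example package name):

```
$ sudo apt-get install firefox     # install
$ sudo apt-get remove firefox      # uninstall, keeping config files
$ sudo apt-get purge firefox       # uninstall, including config files
$ sudo apt-get autoremove          # drop dependencies nothing needs anymore
$ sudo apt-get clean               # empty the cache of downloaded .deb files
```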

If you install a package via make and make install, you'll be responsible for uninstalling it yourself (there may be a README file in the downloaded package that tells you how), as well as for removing all the dependencies associated with it. That's why it is always recommended to install the packages offered by your distro's package manager: that way you can be sure the package has been tested to work with the distro (flavor of Linux) you are using and is very unlikely to break your system. You can also be sure the package will be updated when needed, whereas if you install it yourself, you are responsible for all of this.

I hope this helps :)

11

The normal way to manage installed applications under Linux is with a package manager. The choice of package managers is one of the main things that differentiate distributions. Ubuntu, like Debian (which it is based on), uses dpkg and APT; most of the time, you only need to interact with one of the interfaces to APT, such as apt-get (command line), aptitude (command line or text mode) or Synaptic (GUI).

A package manager keeps track of which files belong to which installed program. As on Windows, programs can execute arbitrary code as part of their installation or uninstallation procedure, but they are usually well-behaved and won't break other programs. Furthermore, the (un)installation code is written by the package maintainer, not by the upstream author (for packages in the main distribution). Unlike Windows, there is a unified interface for installation, upgrade and uninstallation: the package manager. You don't need to search for the uninstaller (if there is one); you just click the "uninstall" icon in the graphical package manager, or run apt-get remove PACKAGENAME.

If you need “exotic” software, you may need to install it manually, either by unpacking an archive or by compiling from source. Installers that come in the form of an executable program are rare in the Linux world. Running make install tends to spread each program over several directories (/usr/local/bin, /usr/local/man, /usr/local/lib, etc.). To keep things sorted, I recommend using a “poor man's package manager”, such as stow. With stow, each package is installed in its own directory, and the stow utility takes care of creating symbolic links so that the commands installed by the package are in the command search path and so on. See Keeping track of programs for more details.
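As a sketch of how stow is typically used (the package name foo-1.0 is a placeholder; each package gets its own subdirectory under /usr/local/stow):

```
$ ./configure --prefix=/usr/local/stow/foo-1.0
$ make
$ sudo make install             # everything lands under /usr/local/stow/foo-1.0
$ cd /usr/local/stow
$ sudo stow foo-1.0             # symlinks the files into /usr/local/bin etc.
$ sudo stow -D foo-1.0          # "uninstall": removes just those symlinks
```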

3
  • this "exotic software" can be written by anyone right? How this "exotic software" becomes officially available in the distros? Does anyone review their source code line by line? How does Ubuntu for example decide to include a software in its APT-GET command and ignore another?
    – AlexStack
    Commented May 15, 2015 at 1:05
  • 1
    @AlexStack Most distributions are made by volunteers. Ubuntu is sponsored by Canonical, which pays a few people, but still the bulk of the package maintenance is done by volunteers. So the most accurate answer is that Ubuntu includes whatever software a volunteer decided to include. More precisely, most software in Ubuntu comes from Debian, so a Debian Developer had to decide the package was worth working on, and the software has to conform to the policy (acceptable license, not too buggy). Commented May 15, 2015 at 1:20
  • @AlexStack there's no guarantee that anyone has reviewed a particular piece of software line-by-line, even if that software is available in the Ubuntu repositories (i.e. through a default installation of apt-get or the like). But they only put reasonably popular programs in the repositories, those that have enough users to be confident that they basically do what they are supposed to do.
    – David Z
    Commented May 17, 2015 at 6:57
8

Almost every distro has its own choice of package manager; there are several popular-ish ones: pacman, apt, rpm, emerge, ... Debian-based distros use apt.

The documentation looks daunting, but it's not actually all that hard to make .debs for local use; just stay on task.

8

You should try to use your package manager (apt-get, aptitude, synaptic, aptdcon, software-center, mintinstall, ...) if at all possible. Using a make task to install is very raw: it's not guaranteed to have an uninstall counterpart, and it's not guaranteed to play well with the rest of the system (it's just a script tied into make's build system, and unlike a reviewed package, make tasks can contain any executable code, potentially malware).

If you don't find a packaged version of the software you need, you might find checkinstall helpful (run checkinstall in place of make install).

3

I'm no expert and don't know much about installing software from source, but using apt-get, you can remove installed software with apt-get remove package-name. To remove all the configuration files too, use apt-get purge package-name. The safest way to keep your Linux installation tidy is to use only packages from the official repositories. When a package you need is not in the official repositories, it can often be found (since you're using Ubuntu) in a PPA.
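Adding a PPA and installing from it follows the same package-manager workflow (the PPA and package names below are placeholders, not a real archive):

```
$ sudo add-apt-repository ppa:some-team/some-app
$ sudo apt-get update
$ sudo apt-get install some-app
```

Packages from a PPA are then removed exactly like any other, e.g. with sudo apt-get purge some-app.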

1
  • apt-get is no use to the OP who has been using configure and make install Commented May 15, 2015 at 8:57
3

As the other answers say, nowadays it's typical to install the vast majority of your software using your distro's package manager of choice. This is so convenient that you will probably miss it when you go back to Windows! In a sense, the various "markets" and "stores" are going in that direction also for commercial OSes.

Having said that, I remember that when I first started learning about Linux, I was puzzled by the way software is typically installed. While on Windows all the files go in a single directory under C:\Program Files, the traditional "Unix way" is to scatter them around in "standard locations" (the details are not that standardized; have a look at the LSB for more information), such as /usr/local/bin for executables, /usr/local/doc for documentation, and so on.

In a sense, Windows "doesn't know" where your executables are. It knows that they are "somewhere under C:\Program Files", but not much more. Scanning all those directories to find them is, or used to be, prohibitively expensive. So a link to the executable would be explicitly placed in a known location (the start menu), and that's what you would use to start it.

On Unix/Linux, your shell (and most other programs, for that matter) will automatically look for executables and other resources in a known set of locations. That's why, just by copying your files into the appropriate directories, you will automatically "see" them, without having to "register" them anywhere for users to find them.
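A minimal sketch of that lookup mechanism (the directory and script names are made up for the demo):

```shell
# Create a private "bin" directory and drop a tiny program into it.
mkdir -p /tmp/demo-bin
printf '#!/bin/sh\necho hello from demo\n' > /tmp/demo-bin/hello-demo
chmod +x /tmp/demo-bin/hello-demo

# Add the directory to PATH; the shell can now find the program --
# no registration step, no start-menu entry, nothing else required.
export PATH="/tmp/demo-bin:$PATH"
command -v hello-demo    # prints /tmp/demo-bin/hello-demo
hello-demo               # prints "hello from demo"
```

Removing the directory from PATH (or deleting it) makes the program invisible again, which is exactly the flexibility being described here.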

Both mechanisms have their pros and cons, but you'll find that the Unix approach is typically more flexible.

Please keep in mind that there are plenty of exceptions and details that make the picture not as clear cut as I described it, but I think this kind of introduction can be useful for newbies to at least understand the basic logic behind it.

3
  • thanks @unclezeif. That issue with "files scattered all around the place" is really bugging me because I don't understand it. You said the Unix approach is typically more flexible. Can you elaborate please?
    – AlexStack
    Commented May 15, 2015 at 11:08
  • For instance, the documentation is all in one place, the icons are all in one place, etc. which in some cases is really nice! By more flexible I mean that, being all path based, you can do things like: spawning a new shell where the environment variables are changed so that you only see the executables in a certain directory, thus limiting (or expanding) your choice. It's all very "simple" as you only use files and environment variables to achieve a great degree of customization.
    – UncleZeiv
    Commented May 15, 2015 at 14:26
  • Another example: in principle you can install a program in your home directory, e.g. /home/foo/bin, and just add /home/foo/bin to your PATH environment variable, without touching the shared system.
    – UncleZeiv
    Commented May 15, 2015 at 14:29
3

I think the best advice is in this forum post. Here are your options (2 and 3 are more or less the same in terms of effect, really):

  1. Use a package manager and a repository. That means you get updates, you get official releases, signed releases, etc. etc. etc.
  2. If you can't or won't use a package from a repository, build a package for the software and install that using your package manager. Detailed instructions for doing this on Debian-based systems are in the post linked above. It looks scary at first, but it's really quite simple, and especially in the case of Debian there are plenty of scripts out there to do all the hard work for you.
  3. If you can't get that method to work, use checkinstall as others have recommended. It's a very simple drop-in replacement for make install:

    $ ./configure
    $ make
    $ sudo checkinstall
    

    This should build the software as normal, then run make install in a confined environment that tracks what it does, and build a package that would do exactly those things. It then installs that package with your package manager. Removing it later is just like removing any other package, as in (2).

  4. If you can't or won't use a package manager, well, then use make install, I guess. And hope the software's maintainer provides an uninstall routine.
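To give a feel for option (2), here is a toy example of building a .deb by hand. The package name hello-local and its contents are invented for the demo, and the snippet is guarded so it only runs where dpkg-deb exists:

```shell
# Build a minimal .deb from a staging directory (Debian/Ubuntu only).
if command -v dpkg-deb >/dev/null 2>&1; then
    # Stage the files exactly where they should land on the target system.
    mkdir -p /tmp/hello-local/DEBIAN /tmp/hello-local/usr/local/bin
    printf '#!/bin/sh\necho hello\n' > /tmp/hello-local/usr/local/bin/hello-local
    chmod +x /tmp/hello-local/usr/local/bin/hello-local

    # Minimal control file: the metadata the package manager will track.
    cat > /tmp/hello-local/DEBIAN/control <<'EOF'
Package: hello-local
Version: 1.0
Architecture: all
Maintainer: you <you@example.com>
Description: toy package built by hand
EOF

    dpkg-deb --build /tmp/hello-local /tmp/hello-local_1.0_all.deb
    # Installing and removing it then goes through the package manager:
    #   sudo dpkg -i /tmp/hello-local_1.0_all.deb
    #   sudo apt-get purge hello-local
fi
```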
