[philiptellis] /bb|[^b]{2}/
Never stop Grokking


Showing posts with label sysadmin. Show all posts

Sunday, April 22, 2007

Upgrading gnome on FreeBSD 4.x

...or how to end up with bloodshot eyes

Let me get this out right at the start. Current versions of Gnome (2.18) can be made to build on FreeBSD 4.x, but not with the default ports configs. You'll have to do a lot to get it working, and sometimes just give up and skip a package.

I'm not going to list everything mainly because I didn't keep notes and I don't remember everything that I did. I'll list here the most memorable issues and what I did to fix them.

1. x11/gnome-applets

The gnome2 meta package depends on a whole load of other packages, and gnome-applets is the first of these. It pulls in several applets, and tries to link itself against hal and gstreamer-plugins. Problems all along.

For starters, one of the packages it tries to pull in is gtop (from devel/libgtop). So? So this is something that reads the kernel's process table, and to do that, it uses struct kinfo_proc from /usr/include/sys/user.h. Not such a big deal until you realise that this struct has changed completely from FreeBSD 4.x to FreeBSD 5 and 6. So, I started to fix the code by looking up proc.h and user.h for equivalent struct members. This process was slow and painful, and I couldn't guarantee correctness.

Then I thought to myself and realised that I've never in my life run gtop. I always open a shell and run top. Discard package.

So I opened x11/gnome-applets/Makefile, and removed the line that said:
gtop-2.0.7:${PORTSDIR}/devel/libgtop \

First problem solved.

Second problem was building against hal (sysutils/hal), which has this little snippet in its Makefile:
.if ${OSVERSION} < 505000
IGNORE=  not supported on FreeBSD prior to 5.5-RELEASE
.endif
which basically says that hal won't build on FreeBSD older than 5.5. This was easily solved, though: just append --without-hal to CONFIGURE_ARGS and you're done. Third, multimedia/gstreamer-plugins had a similar guard:
.if ${OSVERSION} < 500000
IGNORE=        many plugins don't build or even work on 4.x
.endif
I was feeling a little ambitious here, so I said, let's see what doesn't build. I commented out those lines from multimedia/gstreamer-plugins/Makefile, and then started to build the plugins. Guess what? All of them built.

Got back to gnome-applets, and hit another snag. This time it was gnome-control-center, which needed libgnomekbd, which depends on libxklavier. Nothing particularly wrong with this, except that libxklavier has include files installed in two locations, and one of them is wrong. Basically, you have /usr/local/include/libxklavier (correct) and /usr/X11R6/include/libxklavier (wrong). My guess is that the incorrect one came from an older install. The problem is that X11R6 shows up earlier than local in the include path, so libgnomekbd tries to use those headers rather than the ones in local, and the compile fails.

A web search turned up a solution that said "rm -rf /usr/X11R6/include/libxklavier". I tried the more circumspect "mv libxklavier old-libxklavier" instead. Retried, and libgnomekbd built. Returned to gnome-applets and everything started building, except...

2. documentation and help files weren't getting generated

I was getting errors with xmlPathToURI:
Traceback (most recent call last):
File "/usr/local/bin/xml2po", line 34, in ?
import libxml2
File "/usr/local/lib/python2.4/site-packages/libxml2.py", line 1, in ?
import libxml2mod
ImportError: /usr/local/lib/python2.4/site-packages/libxml2mod.so:  
Undefined symbol "xmlPathToURI"
A web search told me that I should rebuild textproc/py-libxml2, so I did that. It went smoothly, and fixed the problem, but another problem showed itself. I started getting a Bus Error on calls to xsltproc - also used for generating gnome docs. Did a pkgdb lookup on /usr/local/bin/xsltproc, which told me that it came from libxslt. I proceeded to rebuild that, and once done, docs and help files started getting generated.

3. devel/gnome-vfs

I also needed to build things like abiword, gnumeric, gnome-terminal, eog and gthumb, and all of these require gnome-vfs, which also depends on hal. This was another quick fix. I commented out these lines:
LIB_DEPENDS=   hal.1:${PORTSDIR}/sysutils/hal
...
--enable-hal \
--with-hal-mount=/sbin/mount \
--with-hal-umount=/sbin/umount
and rebuilt gnome-vfs. All okay, except bookmarks won't automatically be monitored. Can't say that I care.

4. Photos

Now remember, one of the things I needed was gthumb, and I wanted it to be able to read directly from my camera using libgphoto2... unfortunately, libgphoto2 requires libgphoto2_port, which requires libusb, which, for some reason, didn't build from ports the last time I'd tried. In any case, I'd installed all three from source, and they worked perfectly for command line access to my camera. The problem is that the source install of libgphoto2 put its pkg-config file into /usr/local/lib/pkgconfig, while FreeBSD's pkg-config reads from /usr/local/libdata/pkgconfig, and because of this, gthumb kept assuming that libgphoto2 couldn't read from a camera. I added the directory to PKG_CONFIG_PATH and reconfigured, and it built correctly.
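The fix can be sketched as a couple of lines run before re-running configure; the path is the one mentioned above, and the prepend-style assignment is just one reasonable way to do it:

```shell
# Prepend the source-install pkgconfig directory (the path named above)
# so pkg-config finds libgphoto2's .pc file before configure runs
PKG_CONFIG_PATH="/usr/local/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
export PKG_CONFIG_PATH
```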

5. nautilus

Now this is something I couldn't understand, but I managed to fix anyway. A few things stopped me while building nautilus.

First, I got stuck building nautilus-shell-interface.idl on typedef sequence<URI> URIList; because it couldn't find a definition for the type called URI. All the correct files were included, so I'm not sure what the problem was. I noticed that URI was a typedef of string, so I just changed the line to typedef sequence<string> URIList; and it worked.

The next thing that came up was a little more complicated. Basically, src/nautilus-application.c has a call to CORBA_sequence_Nautilus_URI_allocbuf, but this function isn't defined anywhere, and apparently, I'm not the only person who has this problem. I took a hint from the guy at that URL, and decided to allocate the memory myself, but I had to make a few guesses on what it did. I read through the code, and realised that uri_list->_buffer was an array of URIs, and URIs were basically strings, so I changed the code from this:
uri_list->_buffer = CORBA_sequence_Nautilus_URI_allocbuf(length);
to this:
uri_list->_buffer = calloc(length, sizeof(CORBA_string));
The code compiled, and linked successfully, and I haven't had any lockups at all.

Finishing up

At the end of it all, I discovered that ayttm wouldn't start because it couldn't find libgmodule.so. I tried rebuilding, but it couldn't link, so I ended up installing the new gtk+20 package, and then ayttm built, but... now gnome-terminal started freezing up. Rebuilt gnome-terminal against the new gtk+ and everything has been smooth since... except for the missing icon in the top left corner of my screen and the gnome dictionary applet, which won't start. Will investigate further.

Update: 2007/04/23

I've had a few more problems since last night. Minor ones. Basically, while building gdm, the xml parser was barfing on &mdash; and &percnt;:
uk/gdm.xml:2358: parser error : Entity 'mdash' not defined
<para>%r &mdash; випуск (версія OS)
^
uk/gdm.xml:2360: parser error : Entity 'percnt' not defined
<para>%s &mdash; назва системи (тобто
Again, a web search found others who've had this problem, but there wasn't a solution, just a note that it doesn't break anything. I decided to fix it with this little script:
cd work/gdm-2.18.1/docs
perl -pi -e 's/&mdash;/\&#8212;/g' `grep -rl '&mdash;' *`
perl -pi -e 's/&percnt;/\&#37;/g' `grep -rl '&percnt;' *`
and all's well.

Update: 2007/05/10

One of the packages I hadn't upgraded the last time around was deskutils/gnome-utils. This contains a bunch of useful applets like GDict, which I use a lot. I started building it, and ran into the old libgtop problem. This time, just commenting out the entry from the Makefile didn't help. The configure script cried out as well. So, I did this:
cd work/gnome-utils-2.18.1
vi configure
Searched for LIBGTOP_REQUIRED, and changed the version from 2.12.1 to 2.10.0 (the version I have installed). No more complaints about libgtop, but... while building, the compile stopped somewhere inside baobab, because one of the files included monetary.h. This header declares strfmon and little else, and it doesn't exist on FreeBSD 4.x.

I checked the code, verified that there were no calls to strfmon, and removed the reference to this header. The compile proceeded smoothly, and I now have gnome-utils installed.

Thursday, October 19, 2006

Selectively enable network interfaces at bootup in linux

Do you have multiple network interfaces on your linux box, and find that you don't need all of them active at bootup? Perhaps not all networks are connected, and you don't want to waste time with attempts at network negotiation for a connection you know isn't available.

I've faced this situation a couple of times, and came up with a way to tell my startup scripts to skip certain interfaces via kernel commandline parameters (which can be specified via your boot loader).

It's so simple that I often wonder why I (or anyone else) hadn't done it before. It's likely that everyone else on earth answered no to my question above.

Anyway, this is what I did:

In /etc/init.d/network, in the loop that iterates over $interfaces:

# bring up all other interfaces configured to come up at boot time
for i in $interfaces; do
after we've eliminated non-boot interfaces:

if LANG=C egrep -L "^ONBOOT=['\"]?[Nn][Oo]['\"]?" ifcfg-$i > /dev/null ; then
# this loads the module, to preserve ordering
is_available $i
continue
fi
I add this:

# If interface was disabled from kernel cmdline, ignore
if grep -q "$i=off" /proc/cmdline; then
continue
fi

Add the same for the next loop that iterates over $vlaninterfaces $bridgeinterfaces $xdslinterfaces $cipeinterfaces and you're done. As simple as that.
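The check above can be exercised outside the init script. Here's a minimal simulation, with a hypothetical command line standing in for /proc/cmdline:

```shell
# Simulate the kernel-cmdline check; on a real system this string
# would come from /proc/cmdline (the sample value is made up)
cmdline="ro root=LABEL=/ rhgb quiet eth1=off"
skipped=""
for i in eth0 eth1; do
    if echo "$cmdline" | grep -q "$i=off"; then
        skipped="$skipped $i"   # the init script does "continue" here
    fi
done
echo "skipped:$skipped"
```

Only eth1 matches, so only eth1 is skipped; eth0 comes up as usual.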

Now, when your boot loader pops the question, you choose to edit the kernel command line, and add something like eth0=off to prevent eth0 from coming up at boot time. You could turn this into an alternate boot config in grub by adding an entry in /boot/grub/grub.conf like this:

title Linux, skip eth1
root (hd0,1)
kernel /vmlinuz-2.6.10 ro root=LABEL=/ rhgb quiet eth1=off
initrd /initrd-2.6.10.img

This will give you a nice menu option below your default Linux option saying Linux, skip eth1.

You can always enable your interface later by doing /sbin/ifup eth1.

Note: You may need to add is_available $i inside the if block. I don't know, and it works for me without it.

Thursday, December 29, 2005

Using the Samsung SP0802N and ASUS K8V-MX together without lockups

Early this year, I started having hard disk problems (signs of an impending crash), and the decision was to replace my old Samsung 40Gig with a new Samsung 80Gig. The drive installed was a Samsung SP0802N - since I'd heard mostly good reviews of it. I decided to keep both hard disks connected though, just in case.

A few months ago, the computer started showing signs of corrupted RAM. This isn't something that normally happens on two year old RAM. 2 day old RAM, maybe, 10 year old RAM, maybe, but not 2 year old RAM. Power problems are a possibility, and that's not unexpected in my room. Anyway, the system was checked by a hardware guy, and he said that the motherboard needed to be replaced.

The new motherboard was an ASUS K8V-MX, and along with that, we got an AMD Sempron processor.

On my next trip back home, I noticed problems with the system. It was running slower, and was locking up on disk intensive processes. A power cycle was required to get it back, and then there was a high chance that the BIOS wouldn't recognise my new disk, but would grab grub from my old one. I didn't have time to look at it back in October or November, but in December, I did.

Three things came to my mind:
- bad power
- bad hard disk/disk controller
- an incompatibility somewhere

We thought the grounding might be bad throughout the house because the stabiliser and spike buster indicated the same at various outlets. I also read through the motherboard manual. I generally do this before installing a new motherboard, but since I hadn't installed this one, I hadn't read it before. The manual said that a BIOS upgrade was required to function correctly, and that MSDOS and a floppy was required to upgrade the BIOS. I had neither, so ignored that for the moment.

Decided to go get a new hard disk and a UPS, but changed my mind about the hard disk at the last moment, and got just the UPS and some more RAM.

The night before I bought the stuff, I moved the PC to a different room to check (I couldn't get it started in my bedroom), and it started up (which further convinced me that it could have been a power problem). I read through /usr/src/linux/Documentation/kernel-parameters.txt for info on what I could do to stabilise the kernel. That pointed me to other docs, one of which told me that a BIOS upgrade was required for certain ASUS motherboards.

Today, I decided to try upgrading the BIOS. I do not have a floppy drive, or MSDOS, so that was a problem. Booted up from the motherboard CD, which started FreeDOS. FreeDOS, however, only recognises FAT16 partitions, and I had none of those.

Switched back to linux, started fdisk, and tried to create a new FAT16 partition 5MB in size. It created one 16MB in size - I guess it's a least count issue. Had to zero out the first 512 bytes of the partition for DOS to recognise it...

dd if=/dev/zero of=/dev/hda11 bs=512 count=1

Then booted back into FreeDOS and formatted the drive:
format c:

Then booted back into linux to copy the ROM image and ROM writing utility to /dev/hda11, and finally, back to FreeDOS to run the utility.

Ran it, and rebooted to get a CMOS checksum error - not entirely unexpected. Went into BIOS setup and reset options that weren't applicable to my box (no floppy drive, no primary slave, boot order, etc.)

Booted into linux and haven't had a problem yet.

Next step - enable ACPI.

Friday, May 13, 2005

Using your iPod through FreeBSD

If you're a die hard linux or FreeBSD fan, or are just a geek and happen to have an iPod, then you have an opportunity to play. Getting your iPod to work with FreeBSD or linux can bring you much glee. Unfortunately for you, I'm going to shatter your hopes and tell you how to do it.

This howto is about FreeBSD 4.10 and a USB iPod shuffle, because that's what I have. Other kinds of iPods should also work with the same procedure. If you have an older FreeBSD, it may or may not work this way; if you have a newer FreeBSD, it will work better.

Linux should be far simpler, and there are already instructions on the net, so I won't cover that.

So, to start with, you've got to make sure your kernel supports a few things.
- usb (if you have a USB iPod, but put it in anyway)
- ohci/uhci/ehci (the first two for USB 1.x, depending on the type you have, the latter for USB 2.x, put all three in, it won't cause problems)
- firewire (if you have a firewire iPod)
- sbp (serial bus protocol)
- scbus, da, cd, pass
- umass (mass usb storage)
- msdos file system support

To do this, edit your kernel config file, which should be in /usr/src/sys/i386/conf/KERNELNAME or something like that. uname -a will tell you for sure.

Add the following to the end of the file:

# For the iPod
options MSDOSFS
device sbp
device firewire
device scbus
device da
device cd
device pass
device uhci
device ohci
device ehci
device usb
device umass


Of course, check in advance that these aren't already in the file.

Once that's done, reconfigure and rebuild your kernel:

config KERNELNAME
cd ../../compile/KERNELNAME
make depend
make
make install


Also tell your system to load msdos file system at boot up. In /boot/loader.conf, add this:
msdos_load="YES"

Next, install gtkpod. You can portinstall it. Get version 0.88 at least if you need support for the iPod shuffle.

Now, plug in your iPod. The USB iPod should throw some messages into dmesg saying that the Apple iPod is loaded on bus:0, target:0 and lun:0 (or something else). Take note of these numbers.

Now, you can mount the drive as msdos. It should be attached to /dev/da0s1 or something like that. You'll know for sure from the dmesg messages. Run dmesg, and note whether the iPod is on da0 or da1 or something else. That's the drive your iPod is attached to. It will always be s1, i.e., the first BIOS partition, known in FreeBSD as slice 1. On linux it could be as simple as /dev/sda1.

Create a mount point for the drive:

mkdir /mnt/ipod

Mount the drive with the msdos filesystem - assuming this is a windows formatted iPod. If you have a mac formatted iPod, you'll have to do one of the following:
- Add HFS support to your kernel (4.10 may not have this support), or
- Reformat it as a windows iPod using windows, mac or pc-fdisk on FreeBSD/Linux. I wasn't very successful running pc-fdisk on the drive in FreeBSD, but that's probably because of a bad USB implementation in 4.10

You'll find instructions about formatting at the gtkpod site.

To mount use this:

mount -t msdos /dev/da0s1 /mnt/ipod

Now run gtkpod, and follow its interface.

Once you're done with gtkpod, quit, and unmount the drive. The orange light might still blink, indicating that the drive is not ready to be removed. Use camcontrol to eject it:

camcontrol eject 0:0:0

The numbers at the end are your bus:target:lun numbers, so use the ones you noted down at the start.

Once you've done that you should be able to unplug the iPod and use it.

That's it. Have fun.

Thursday, February 24, 2005

Security in Linux

(from the linux security FAQ)

Glossary:

Cracker
someone who gains unauthorised access to a system. Not to be confused with a hacker. A hacker is really someone who likes to play with computers and write good software. The media often tends to confuse the two. Hackers create, Crackers break.
IDS
Intrusion detection system. A system that tries to detect if your system has been compromised, and warns you of it.
Tripwire
A kind of IDS that checks whether critical system binaries and configuration files have been modified or not.
Firewall
a system that filters traffic moving from outside the network to inside, and vice-versa.
Port scanner
a program that checks a host to see which ports are open for external connections. It generally does a blind connect on all ports of a host. Some port scanners can do stealth scanning.
Security scanner
a program that checks a host for known vulnerabilities. Security scanners generally try to exploit a vulnerability without causing the harmful effects that would accompany a genuine break-in. Some exploits are designed to crash a system, though, and in these cases the security scanner may well have to crash a system to show that it is vulnerable. It is better, though, to be crashed while scanning your own system than when someone is actually trying to crash you.

Introduction

Some of the most common questions asked by people trying to secure their linux systems are: What is security? How can I protect myself from a break-in? How secure should my linux box be? Which is the most secure linux distribution?

Security, in a nutshell, is ensuring the Confidentiality, Integrity, and Availability of your systems and network.

In short, to protect yourself, you should install the most recent security patches for your distro, turn off unused/unrequired services, run the others through tcpwrappers, and instead of telnet/ftp for remote access, use a secure alternative. The rest of this document will attempt to cover in more detail how to go about securing your linux system.

Most important is to decide how secure you need to be. You need to assess the risk to you, and base your security on that.

risk=(threat*vulnerability*impact)

Threat: the probability of being attacked
Vulnerability: how easy it is to break in
Impact: the cost of recovering from an attack
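As a toy illustration of the formula, here's an arithmetic sketch. The scores are entirely made up (on an arbitrary 1-10 scale); the point is only that the three factors multiply:

```shell
# Hypothetical scores, purely illustrative (1-10 scale)
threat=7        # fairly exposed host
vulnerability=4 # patched, but several services running
impact=9        # no recent backups
risk=$((threat * vulnerability * impact))
echo "relative risk score: $risk"
```

A relative score like this is only useful for comparing two setups, not as an absolute measure.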

You cannot be 100% secure because there will always be security holes that you either do not know about, or are infeasible to patch.

When picking a distribution with security in mind, you should really pick one that has secure default values that you can tweak later. There's no point in installing a system that someone breaks into before you even have a chance to secure it.

Distributions like Secure Linux and slinux aim to set secure defaults.

Most distros do not have secure defaults because this tends to make the system hard for end users to use. Securing a system is really a trade-off between convenience to your users, and protecting their data.

In general, never rely on the default installation of any distribution. Consult the Linux Administrator's Security Guide for information on how to secure specific distributions.

Alternatively, OpenBSD was designed from the ground up as a secure unix, and is probably your best choice for a pure unix implementation. OpenBSD servers and firewalls are extremely secure.

A good idea would be to set up a rather open internal network, with tight security between the inside and the outside. That way, local users still have all the convenience, while the system is secure from an external threat. There are still two problems with this approach.

If you have legitimate users who need to connect to your system remotely, they will be inconvenienced by your external security. This is hard to avoid, as opening up your system to one person can really open it up to the world.

On the inside too, if your users cannot be trusted, then lax internal security could hurt you. Your users could compromise your system by simply not setting good passwords, or leaving their terminals logged in while they are away. There have been cases when crackers have walked into offices, and found system passwords pasted on the office bulletin board for everyone's convenience. Although hitherto unheard of in India, companies abroad have been known to place spies in competitor's companies to steal corporate secrets. There's no use in having the ultimate in network security if your employee is simply going to copy all your secrets onto a floppy and walk out with it.

Apart from securing each computer system, and the network as a whole, one also needs to physically secure the entire installation.

Firewalls

To protect your network, you'd use a firewall between your internal network and the rest of the world.

A firewall set up is basically a set of rules that tell the firewall whether a given packet is to be allowed through or not. It can also log information on packets passing through, as well as modify or redirect these packets.

Setting up a firewall is very well explained in the linux firewall howto.

In general, you will need to configure ipchains on a 2.2 kernel, or iptables on a 2.4 or 2.6 kernel.
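As a sketch of what such rules look like with iptables (2.4/2.6 kernels): a default-deny inbound policy that keeps loopback, established connections, and ssh working. This is illustrative only, not a complete ruleset, and it needs root:

```shell
# Illustrative only: default-deny inbound, allow loopback,
# established flows, and ssh (port 22); requires root
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
```

Run something like this from the console, not over ssh, until you're sure the ssh rule is in place.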

A firewall is indispensable to the security of a network, and it is best run on a dedicated machine rather than as a service on a machine doing other work.

Since a firewall is meant to filter traffic to and from your network, you ideally want it to sit between your network and the rest of the world. Your firewall would have two network interfaces, one of which connects to your network, and the other to the world.

Firewall rules decide which packets get from one side to another.

A firewall is generally implemented at the kernel level, and can be fast provided it works completely in memory and does not have too many rules. Ideally, you only want your firewall to filter IPs, and let a higher level service handle service based filtering, for example, have tcpd check if anyone is trying to connect to restricted ports on your system, or use a proxy based system to restrict websites that your users may visit. Better logging can be done at these levels, and they are less demanding on the kernel.

Services

Run only the services that you require and no more. On a desktop system, which you will not access remotely, there should not be any services. Run different levels of services on different machines.

You can find out which services are running by using the ps and netstat commands:

ps auxfw will show you a tree structure of processes running, while netstat -ap and netstat -altpu will show you which processes are listening on network ports.

You may also want to do a port scan of your machine using a tool like nmap (remember, Trinity used it in the Matrix Reloaded), or a security scanner like nessus.

Some really unsafe services include rsh, rcp, rexec. Many versions of sendmail and bind have well known security holes. Also disable echo, discard, finger, daytime, chargen and gopher if you don't use them.

Wherever possible, use an encrypted protocol rather than a plain text protocol. For example, use ssh instead of telnet/rsh, scp instead of ftp, and IMAP with SSL instead of POP3.

On a single user system, you should also disable identd, but on a multiuser system, this is a good way of tracking down problem users on your system.

You also want to use tcpwrappers to start your services. Tcpwrappers are basically an intermediary between inetd and the service that actually serves a connection, like, say, telnet. Tcpd will check whether the connecting host is allowed to connect to this service. Different kinds of access control and logging can be done through tcp wrappers.

TCPWrappers

TCPWrappers, and their associated configuration files /etc/hosts.deny and /etc/hosts.allow help a system administrator set up good access control for his system.

First, some background. Most unix systems use what is called a super server to run other servers. The purpose of a superserver is basically to listen on all the ports that you want people to connect to, and when a connection is made to a port, spawn the relevant server. The advantage of such a setup is threefold.

Primarily, all these other servers do not need to implement socket I/O routines. They simply communicate through stdio, and the superserver connects the socket's I/O streams to stdio before spawning a server.

Secondly, we keep our process table small by not running all servers all the time. Only one server runs all the time, and servers that are never required are never started. A server that is required is run only for the duration that it needs to serve a connection.

Finally, and really as a consequence of such a set up, we can implement security centrally, and have all servers benefit from it, even if they have no idea that it exists. In fact, these servers know nothing about security at all.

Now, in older systems, the superserver was inetd, or the Internet Daemon. In newer systems, it has been replaced with xinetd, which is simply an extended inetd. xinetd can implement security internally, while inetd spawns an external security handler, most commonly tcpd.

The configuration files for these servers are usually /etc/inetd.conf and /etc/xinetd.conf, /etc/xinetd.d/*. We aren't concerned too much about the contents of these files, except what services are started by them. Most commonly, the superserver will start services like telnetd, ftpd, rlogind, rshelld, rstatd, fingerd, talkd, ntalkd, etc. Many of these may not be required, and can be stopped. In inetd, this involves commenting out the relevant line in inetd.conf, while in xinetd, it involves setting disable = yes in /etc/xinetd.d/service_name.
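A sketch of what disabling telnet looks like in each scheme; these are config fragments, and the exact surrounding lines vary by distribution:

```shell
# inetd: comment out the service's line in /etc/inetd.conf
#telnet  stream  tcp     nowait  root    /usr/sbin/tcpd  in.telnetd

# xinetd: in /etc/xinetd.d/telnet, inside the service block, set
#     disable = yes
# then tell the superserver to reload its config, e.g.:
#     kill -HUP `cat /var/run/inetd.pid`
```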

Disabling these services altogether can inconvenience your users, so some selectivity helps. For example, you may want to allow nfs connections from certain hosts within your network, but disable them for everyone else. Furthermore, several services have well known exploits, and detecting when someone is trying these is a good early warning of a possible attack.

This is where tcpwrappers, or tcpd (the tcp daemon) as it is known, comes in. TCPWrappers are basically wrappers around your services. It is implemented in two ways, either through the tcp daemon, which starts the requested service after doing access control checks, or through libwrap,
which may be linked into the server itself. Either way, the wrappers rely on the files /etc/hosts.{deny,allow}.

The full intent and use of tcp wrappers is well documented, and is shipped with all linux distributions. It can be found in /usr/doc/tcp_wrappers/* or /usr/share/doc/tcp_wrappers/*. Here I will outline the most important usage.
How exactly does tcpd come into play?
Instead of directly starting the server, inetd can start tcpd, and tell tcpd to start the correct server after performing any checks it wants. If one opens /etc/inetd.conf, one will find against the telnet and ftp lines that the daemon to be spawned is tcpd, with in.telnetd/in.ftpd as arguments.
telnet  stream  tcp     nowait  root    /usr/sbin/tcpd  in.telnetd

The other values on the line aren't important for this discussion, and you'll figure them out soon enough.

Now, in execve parlance, the first argument passed in the argument vector corresponds to argv[0], i.e., the name that the program should call itself. tcpd takes this hint, and calls itself in.telnetd (which is what will show up if you list running processes). It performs its checks, and then execs in.telnetd, passing all file descriptors on.
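You can see the same argv[0] trick from a shell: bash's exec -a sets argv[0] just as inetd does when it spawns tcpd (the in.telnetd name here is only a label, not a real daemon):

```shell
# The child bash reports the name it was exec'd under, not "bash"
name=$(bash -c 'exec -a in.telnetd bash -c "echo \$0"')
echo "child believes it is: $name"
```

The inner shell prints in.telnetd, which is exactly what a process listing would show for tcpd after the rename.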

Thus, we have tcpd, which has a single access control file, doing checks for most daemons. Furthermore, since tcpd comes into the picture only while a connection is being established, and leaves the scene thereafter, there is no ongoing overhead (except for that during checking, which is what we want).

Now, not all servers are started through inetd. Many, like sendmail, apache, and sshd, run as standalone servers. These servers can have tcpd compiled into them using libwrap.a and tcpd.h. They will then automatically check with hosts.allow and hosts.deny.

Now all these options must be selected while compiling tcpd and libwrap, but the defaults are decently secure anyway.

To check the configuration of your tcpd wrappers, use /sbin/tcpdchk. Give it the -v flag for more information.
The hosts.{deny,allow} files
Wietse Venema, the creator of tcpd, also developed a 'language' for specifying the access control rules that govern who can use which service.
These rules are specified in hosts.allow and hosts.deny. The normal strategy is to deny all connections, and explicitly allow only services that you want people connecting to. For example, your hosts.deny would read:
ALL: ALL 

This means deny all services to requests from all addresses.

Remember that hosts.allow is checked first, then hosts.deny, and the first rule that matches is applied. So if a match is found in hosts.allow, the connection is allowed; if not, hosts.deny is consulted, and a match there denies the connection. If nothing matches in either file (or hosts.deny is empty or missing), the default is to grant access. The extended acl language also allows deny rules to be specified in hosts.allow, so you really only have to manipulate a single file.
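The lookup order can be modelled with a toy shell function, the two files reduced to variables. Real tcpd patterns are much richer than the exact string matches used here; this only shows the allow-then-deny-then-default sequence:

```shell
# allowed if matched in "hosts.allow"; else denied if matched in
# "hosts.deny"; else allowed by default
allow_rules="sshd:127.0.0.1"
deny_rules="ALL:ALL"
check() {
    case " $allow_rules " in *" $1 "*) echo allow; return;; esac
    case " $deny_rules " in *" ALL:ALL "*|*" $1 "*) echo deny; return;; esac
    echo allow
}
check "sshd:127.0.0.1"      # matches the allow list
check "fingerd:10.0.0.5"    # caught by ALL:ALL in the deny list
```

With an ALL:ALL deny rule in place, anything not explicitly allowed falls through to deny, which is the recommended stance.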

Rather than go into the details of all possible configurations, I'll just paste my own hosts.allow file here, and explain it line by line.
#
# hosts.allow This file describes the names of the hosts which are
#  allowed to use the local INET services, as decided
#  by the '/usr/sbin/tcpd' server.
#

# allow everyone to connect to 25.  ACL implemented in sendmail
sendmail: ALL

# ssh from certain hosts only.
sshd: 202.141.152.16 202.141.151. 202.141.152.210 127.0.0.1 : ALLOW

# Allow people within the domain to talk to me
in.talkd in.ntalkd: 202.141.151. 202.141.152. LOCAL : ALLOW
in.fingerd: 202.141.151. LOCAL EXCEPT 202.141.151.1 : ALLOW

# Set a default deny stance with back finger "booby trap" (Venema's term)
# Allow finger to prevent deadly finger wars, whereby another booby trapped
# box answers our finger with its own, spawning another from us, ad infinitum

ALL : ALL : spawn (/usr/sbin/safe_finger -l @%h | /bin/mail -s "Port Denial noted %d-%h" hostmaster) & : DENY

The above file starts off by allowing anyone to connect to my sendmail daemon. The sendmail daemon is in a better position to do access control, as this needs to be done based on sender and recipient address rather than IP address. If you suspect that certain hosts are unnecessarily hitting you on 25, then you can block them explicitly.

The next line allows ssh connections from certain specific hosts on the 202.141.152. network, and from all hosts on the 202.141.151. network. I may need to connect to my machine from different places on my network. These connections go over a broadcast network, so I prefer ssh for connecting.

I allow finger and talk from within my domain, but not from 202.141.151.1.

Finally, I set a booby trap for anyone connecting to services that they are not authorised to access. A reverse finger is done on the attacking host, and a mail is sent to the administrator of my machine with this information.

Intrusion Detection

Intrusion Detection is the ability to detect people trying to compromise your system. Intrusion detection falls into two main categories: host based and network based. If a host monitors only itself, you are running a host based IDS; if one host monitors your entire network, you have a network based IDS. Most home users would use a host based IDS, while universities and offices would typically run a network based IDS.

There are many Intrusion Detection Systems (IDS) for Linux. The most popular, usable both host based and network based, is snort. Others are portsentry and lids - the Linux Intrusion Detection System [inactive as of 2013]. Going into the details of each of these is beyond the scope of this document, but all of these tools have very good documentation.

In addition to an IDS, you would also want to use an Integrity checker, which basically makes sure that none of your binaries and critical configuration files have been modified.

When a cracker compromises a system, the first thing he's likely to do is create a backdoor for himself. There have been many instances where critical binaries like the ssh daemon have been replaced with trojaned versions that capture passwords and mail them back to the cracker. This then gives the attacker free access to the system, even if the original hole is plugged.

Tools like tripwire, AIDE, and FreeVeracity check the integrity of your binaries. Of the above, FreeVeracity is reputed to be very easy to set up and use.

Typically, one would create an integrity database when the system is installed, and update it whenever new binaries are installed. The database should always be backed up onto read-only media like a CD. The checker should be run every day through a crontab entry, to check all critical files. If the tool finds any discrepancies, it sends a mail to a pre-defined email address.
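As a sketch of that workflow, here's a poor man's integrity checker built from md5sum. The real tools (tripwire, AIDE) also record ownership, permissions and inode data, and the file list and DB path below are purely illustrative:

```shell
#!/bin/sh
# Poor man's integrity checker. Real tools (tripwire, AIDE) also track
# ownership, permissions and inodes, but the workflow is the same.
# File list and DB path are illustrative; cover all of /bin, /sbin,
# /usr/bin and /usr/sbin in practice.
DB=./integrity.md5               # keep a copy on read-only media!

# 1. At install time: record checksums of critical binaries
md5sum /bin/ls /bin/sh > "$DB"

# 2. From a nightly cron job: re-check against the baseline
#    (in practice, pipe the report to: mail -s "integrity report" root)
md5sum --quiet -c "$DB" && echo "all files intact"
```

If a binary has been replaced, md5sum -c prints a FAILED line for it instead.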

The integrity checker should be configured well to prevent false alarms, which make it more of a hindrance than an aid.
So, how do you know whether you've been compromised or not?
CERT has released an advisory to help you identify if an intruder is on your system.

In short though:
  • Check your log files
  • Look for setuid/setgid files, especially ones owned by root
  • Check what your integrity checker has to say about your system binaries
  • Check for packet sniffers that may be running
  • If you didn't install it, it shouldn't be there
  • Check your crontabs and at queues
  • Check for services that shouldn't be running on your system
  • Check /etc/passwd for new accounts, or inactive accounts that have suddenly become active
Full details, including how to do the above are listed in the abovementioned document.
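A few of these checks as commands (non-exhaustive, scoped down for speed; option spellings vary a little between systems):

```shell
#!/bin/sh
# Quick post-compromise triage. Non-exhaustive.

# Setuid/setgid files, especially root-owned ones you don't recognise
# (scoped to /usr/bin here; scan the whole filesystem in practice)
find /usr/bin -type f \( -perm -4000 -o -perm -2000 \) -ls

# Services listening on the network that you didn't configure
netstat -an 2>/dev/null | grep LISTEN || true

# Scheduled jobs you didn't create
crontab -l 2>/dev/null || true
ls /var/spool/cron /etc/cron.d 2>/dev/null || true
```

Compare the output against what you know you set up; anything unfamiliar deserves a closer look.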
So what do you do once you know that you've been compromised?
Well, the first thing is not to panic. It is very important not to disturb any trails that the cracker has left behind, as these can be used to work out who the attacker was, and even exactly what he did. Most importantly, don't touch anything on the system.

Step one is to disconnect the machine from the network. This will not only prevent further attacks, it will also prevent the attacker from covering up his trails if he finds out that he's been caught.

To prevent any data from being changed, you should also mount your file systems read-only.

Copy all your log files out to another system, or a floppy disk, where you can examine them safely.

Analyse the saved data to determine what the attacker did to break in and what he did after that.

Restore your system from known pre-compromise backups.

Again, CERT has published a white paper on recovering from an attack.

Testing Security

There are many commercial organisations that will test the security of your system for you. These are costly though. A cheaper alternative may be to use one of the many web based security scanners to test your system.

http://www.hackerwhacker.com
http://maxvision.net/#free
http://www.grc.com
http://privacy.net/analyze
http://www.secure-me.net/

You shouldn't blindly trust what they tell you, but it is interesting to monitor your logs and network while an attack is in progress.

You can also test yourself by using your own port scanner and security scanners.

Nmap is the most popular and widely used port scanner around, both by black hats and white hats. It can also determine which OS you use, which is what a cracker would need to know to find OS specific vulnerabilities.

If nmap can't figure out which OS you're using, that could slow down your attacker for a while.

SATAN (Security Analysis Tool for Auditing Networks) was developed by Dan Farmer of Sun Microsystems and Wietse Venema (of tcpd and postfix fame, then at the Eindhoven University of Technology, Netherlands, and currently at IBM). It was written with the specific intent of doing everything that an attacker would do to gain unauthorised access. It has since been superseded by a next generation version called SAINT.

Nessus has a plugin based architecture. Vulnerability checks are written as plugins, which means that you can check for new holes as they become publicly known, without upgrading the entire binary.

Viruses and Trojans

The real question in this section is: is Linux vulnerable to viruses and trojans?

Practically, no. Technically though, it is possible.

Due to the design of Linux, it is difficult for viruses to spread far within a system, as they are confined to infecting the user space of the user who executes them. Of course, this is a problem if infected files are launched by root, but as a security conscious individual, you wouldn't be running untrusted files as root, would you?

It is theoretically possible for a virus launched by a regular user to escalate its privileges using system exploits; however, a virus with this capability would be quite sizable, and difficult to write. As of this date, few viruses have actually been discovered for Linux, and the ones that have been discovered aren't worth losing sleep over. This will undoubtedly change with time.

Worms like l10n and Top Ramen only worked because the systems they infected were insecure to begin with. An insecure ftpd/rstatd was exploited to gain access to machines automatically, and to use them as launching grounds for further attacks.

Viruses do exist for Linux, but are probably the least significant threat you face. On the other hand, Linux is definitely vulnerable to trojans.

A trojan is a malicious program that masquerades as a legitimate application. Unlike viruses, they do not self replicate, but instead, their primary purpose is (usually) to allow an attacker remote access to your computer or its resources. Sometimes, users can be tricked into downloading and installing trojans onto their own computers, but more commonly, trojans are installed by an intruder to allow him future access to your box.

Trojans often come packaged as "root kits". A "root kit" is a set of trojaned system applications that help mask a compromise. A root kit will usually include trojaned versions of ps, getty, and passwd.

At this point in time, virus scanners for Linux are aimed at detecting and disinfecting data served to Windows hosts by a Linux file/mail server. This can be useful to help stop the spread of viruses among local, non-Unix machines. Due to the lack of viruses for Linux, there are presently no scanners to detect viruses within the Linux OS, or its applications. Trojans present a greater threat to the Linux OS itself than do viruses, and can be detected by regularly verifying the integrity of your binaries, or by using a rootkit detector.
Trojan Detectors:
Chkrootkit: Checks Linux system for evidence of having been rootkitted.

Root Kit Detector: A daemon that alerts you if someone attempts to rootkit you.
Virus Scanners for Linux File Servers:
AMaViS: A sendmail plugin that scans incoming mail for viruses.

AntiVir for Linux: Scans incoming mail and ftp for viruses.

Interscan Viruswall: A Firewall-1 add-on that scans ftp, http, and smtp for viruses.

Sophos AntiVirus: Checks shares, mail attachments, ftp, etc. for viruses.

Finally, a system administrator must understand that security is a process. You need to keep yourself up to date with all the latest security news. Subscribe to the securityfocus, cert, and other security related mailing lists. Stick to the comp.os.linux.security newsgroup. That's also a good place to post your queries - if they haven't already been answered (hey, most of this doc was from the faq in there).

Monitor your log files regularly. Use remote logging to protect against modified log files. Protect your system binaries. Keep them on read-only partitions if required.
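Remote logging with the stock sysklogd is a one-line change (the hostname below is hypothetical, and the receiving machine's syslogd must run with -r to accept remote messages):

```
# /etc/syslog.conf on the machine being protected:
# send a copy of everything to a central loghost as well
*.*    @loghost.example.com
```

An attacker who roots the box can still doctor the local logs, but not the copies already shipped to the loghost.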

The only way to protect yourself completely, is to be aware of what is happening all the time.

References:

The comp.os.linux.security faq
The linux security howto
The linux administrator's security guide
The linux firewall and proxy server howto
CERT advisories
Security Focus

Friday, December 31, 2004

Nvidia GeForce MX 2 with linux 2.6.6+

I've been using a GeForce MX 2 for well over a year. It worked quite well with RH8, FC1 and Knoppix. I needed to use the proprietary drivers provided by Nvidia to get hardware acceleration though.

Motherboard: ASUS A7N266

A couple of months ago, upgraded to FC2, and the nvidia driver wouldn't work anymore. I had to run back to Bangalore, and since no one at home really needed hardware acceleration, I switched back to the free nv driver from X (well, I was using x.org now).

This December... well, yesterday actually, I decided to try out 3ddesktop, but of course, this requires hardware acceleration. So I started. Went through a lot to get it to work, and the details are boring. However, what I learnt could help others, so I'll document that.

The problem:

When starting X with the nvidia driver, the screen blanked out and the system froze. Pushing the reset button was the only thing that worked.

Solutions and Caveats:

Get the latest NVIDIA drivers and try.

At the time of writing, the latest drivers from the nvidia site are in the 1.0-6629 package. This doesn't work with the GeForce MX 2, and many other older chips, so if you try to use it, you'll spend too much time breaking your head for nothing. Instead, go for the 1.0-6111 driver, which does work well...

On kernels below 2.6.5 that is. FC2 ships with a modified 2.6.5 kernel that has a forced 4K stack and CONFIG_REGPARM turned on. The NVIDIA drivers are (or were) compiled with 8K stacks and do not work with CONFIG_REGPARM turned on. I'd faced similar problems when I first used the nvidia driver, and recompiling my kernel with 8K stacks fixed the problem.

Searching the net, I came across dozens of articles that spoke about 4K stacks v/s 8K stacks in the 2.6 kernel, but also said that from 5xxx onwards, the nvidia driver supported 4K stacks and CONFIG_REGPARM.

I tried getting prebuilt kernels (smaller download) with 16K stacks, but it didn't help, so finally decided to download the entire 32MB kernel source for 2.6.10.

While compiling, I came across this thread on NV News (pretty much the best resource for nvidia issues on linux). In short, the 6111 driver wouldn't work with kernels above 2.6.5 or something like that. I needed to patch the kernel source.

The patch was tiny enough: in arch/i386/mm/init.c, add a single line:
EXPORT_SYMBOL(__VMALLOC_RESERVE);
after the __VMALLOC_RESERVE definition.

Stopped compilation, made the change and restarted compilation.

Also had to rebuild the NVIDIA driver package, again as documented in that thread:

- extract the sources with the command : ./NVIDIA-Linux-x86-1.0-6111-pkg1.run --extract-only
- in the file "./NVIDIA-Linux-x86-1.0-6111-pkg1/usr/src/nv/nv.c" replace the 4 occurrences of
'pci_find_class' with 'pci_get_class'
- repack the nvidia installer with the following command:

sh ./NVIDIA-Linux-x86-1.0-6111-pkg1/usr/bin/makeself.sh --target-os Linux --target-arch x86 NVIDIA-Linux-x86-1.0-6111-pkg1 NVIDIA-Linux-x86-1.0-6111-pkg2.run "NVIDIA Accelerated Graphics Driver for Linux-x86 1.0-6111" ./nvidia-installer

The new installer is called "NVIDIA-Linux-x86-1.0-6111-pkg2.run"

With these changes, the driver compiled successfully and I was able to insert it.

I had a minor problem when rebooting: usbdevfs has become usbfs, so a change has to be made in /etc/rc.sysinit. Change all occurrences of "usbdevfs usbdevfs" to "usbfs none".
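The edit can be scripted with sed. This is my reading of the change, not an exact FC2 diff, so keep a backup:

```shell
#!/bin/sh
# Swap usbdevfs for usbfs in rc.sysinit. FC2 path shown; run as root.
F=${1:-/etc/rc.sysinit}
[ -f "$F" ] || { echo "no $F here, nothing to do"; exit 0; }
cp "$F" "$F.bak"                                   # keep a backup
sed 's/usbdevfs usbdevfs/usbfs none/g' "$F.bak" > "$F"
echo "patched $F (backup in $F.bak)"
```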

Once you've done this, X should start with acceleration on.

3ddesktop is slow, but it starts up. Tux racer works well.

What I think is really cool about this solution, is that I did not have to make a single post to a single mailing list or forum. All the information I needed was already on the net. It was just a matter of reading it, understanding what it said, and following those instructions. For example, there were many threads on screen blanking with the 6629 driver, and somewhere in there was mentioned that the new driver didn't support older hardware, but that the 6111 did. That was the key that brought me the solution. I knew the 6111 didn't work out of the box, because I'd already tried it, but now I could concentrate on threads about the 6111 exclusively, only looking for anything that sounded familiar.

Saturday, November 13, 2004

/home is NTFS

A little over a year ago, at my previous company, I had to change my second harddisk on my PC. It was a bit of an experience, because the service engineer who came to do the job had never encountered linux before, but seemed to think that he could deal with it just like he did windows.

The engineer put in the new hard disk as a secondary master (my old one was a secondary slave, with the CD drive as the master).

He then booted using a Win 95 diskette... hmm... what's this? Then started some norton disk copy utility. It's a DOS app that goes into a graphics mode... why?

Then started transferring data... hey, wait a minute, I don't have any NTFS partitions. Hit reset! Ok, cool down for a minute. I've got three ext3 partitions. So, now it's time to assess the damage.

Boot into linux - hmm, /dev/hdd1 (/home) won't mount, down to root shell. Get /home out of /etc/fstab and reboot. Ok, runlevel 3 again. Check other partitions - hdd5 (/usr) ... good, hdd6 (/usr/share) ... good... everything else is on hda... good. all my data, is in /home ... not good

So, I start trying to figure out how to recover. google... no luck. google again... one proprietary app, and a couple of howtos on recovering deleted files from ext2 partitions... no good. google again, get some docs on the structure of ext2, and find a util called e2salvage which won't build. time to start fooling around myself.

start by reading man pages. tune2fs, e2fsck, debugfs, mke2fs... so I know that mke2fs makes backups of the superblock, but where are they?

mke2fs -n /dev/hdd1... ok, that's where
dd if=/dev/hdd5 of=superblock bs=4096 count=2
hmm, so that's what a superblock looks like
dd if=/dev/hdd5 of=superblock2 bs=4096 count=2 skip=32768
hey, that's not a superblock. Ok, try various combinations, finally get this:
dd if=/dev/hdd5 of=superblock2 bs=1024 count=8 skip=131071
that's 32768*4-1
Ok, so that's where the second superblock is.

Check hdd1 - second superblock blown away as well. Look for the third... 98304*4-1=393215.. ok, that's good. should I dd it to the first? Hmm, no, e2fsck can do that for me... but, I shouldn't work on the original partition. Luckily I have 30GB of free space to play with, and /home is just 6GB.
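For anyone repeating this, the skip arithmetic in one place (32768 and 98304 are the backup-superblock group starts that mke2fs -n reported for my partition; yours may differ):

```shell
# Backup superblocks live at the start of block groups 32768, 98304, ...
# (4K filesystem blocks, per mke2fs -n). The dd above uses bs=1024, so
# multiply by 4; empirically the copy started one 1K block early, hence -1.
echo $((32768 * 4 - 1))    # second superblock: 131071
echo $((98304 * 4 - 1))    # third superblock: 393215
```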

dd if=/dev/hdd1 of=/mnt/tmp/home.img
cp home.img bak.home.img


Now I start playing with home.img.

The instructions in e2salvage said to try e2fsck before trying e2salvage, so I try that.

e2fsck home.img
no use... can't find superblock
e2fsck -b 32768 -B 4096 home.img
works... starts the fsck, and gets rid of the journal. this is gonna take too long if I do it manually, so I quit, and restart with:
e2fsck -b 32768 -B 4096 -y home.img
The other option would have been to -p(reen) it, but that wouldn't give me any messages on stdout, so I stuck with -y(es to all questions).

2 passes later it says, ok, got whatever I could.

mount -oloop,ro home.img /mnt/home
yippeee, it mounted
cd /mnt/home; ls
lost+found

ok, so everything's in lost+found, and it will take me ages to sift through all this. Filenames might give me some clues.
find . -type f | less
Ok, scroll, scroll, scroll... hmm, this looks like my home directory... yes.
cp -a \#172401 /mnt/home/philip
scroll some more, find /usr/share/doc (which I keep in /home/doc and symlink from /usr/share/doc). move it back to /usr/share/doc. find jdk1.1.8 documentation... pretend I didn't see that.

find moodle home - yay. find yabb home - yay again. Ok, find a bit more that's worth saving, and copy it over. Many files in each of these directories are corrupted, including mailboxes, and some amount of test data, but haven't found anything serious missing.

All code was in CVS anyway, so rebuilt from there where I had to.

Now decided to try e2salvage anyway, on the second copy of hdd1. It wouldn't compile. Changed some code to get it to compile, it ran, found inodes, directories and the works, then segfaulted. The program tries to read from inode 2, which doesn't exist on my partition, and then it tries to printf that inode without checking the return value.

I'd have fixed that, but the result is used in further calculations, so I just left it at that. The old hard disk was taken away, so I don't have anything to play with anymore.

It'll take me a little while to figure out all that was lost, but so far it doesn't look like anything serious.

Friday, August 27, 2004

Undelete in FreeBSD

A colleague of mine deleted a source file he'd been working on for over a week.

How do you undelete a file on a UFS partition? I'd done it before on ext2, I'd also recovered lost files from a damaged FAT32 partition (one that no OS would recognise), heck, I'd even recovered an ext3 file system that had been overwritten by NTFS. Why should undelete be tough?

Well, the tough part was that in all earlier instances, I had a spare partition to play on, _and_ I had root (login, not sudo) on the box, so could effectively boot in single user mode and mount the affected partition read-only. Couldn't do that here. I'd have to work on a live partition. sweet.

The first thing to do was power down the machine (using the power switch) to prevent any writes to the disk via cron or anything else. We then set about trying to figure out our strategy. A couple of websites had tools that could undelete files, but they'd have to be installed on the affected partition, so that was out of the question.

Now the machine has two partitions, one for / and one for /home. / is significantly smaller than /home, but has enough free space to play with about 100MB at a time. Decided to give it a try, copying /home to /tmp 10MB at a time.

Command:
dd if=/dev/ad0s1e of=deleted-file bs=1024k count=10

searched through the 10MB file for a unique string that should have been in the file. No match. Next 10 MB:
dd if=/dev/ad0s1e of=deleted-file bs=1024k count=10 skip=10

This was obviously going to take all night, so we decided to script it. (the code is broken into multiple lines for readability, we actually had it all on one line).
for i in 10 20 30 40 50 60 70 80 90; do
    dd if=/dev/ad0s1e of=deleted-file bs=1024k count=10 skip=$i;
    grep unique-string deleted-file && echo $i
done


We'd note down the numbers that hit a positive and then go back and get those sections again. Painful.

Ran through without luck. Had to then go from 100 to 190, so scripted that too with an outer loop:

for j in 1 2 3 4 5 6 7 8 9; do
    for i in 00 10 20 30 40 50 60 70 80 90; do
        dd ..... of=deleted-file ... skip=$j$i; ...

The observant reader will ask why we didn't just put in an increment like i=$[ $i+10 ].
Well, that runs too fast, and we wouldn't be able to break out easily - we'd have had to hit Ctrl+C on every iteration. This way, the sheer pain of having to type in every number we wanted was enough to keep the limits low. That wasn't the real reason though: we did it because explicit lists would also be useful later, when we had to test only specific, non-consecutive blocks.

IAC, the number of loops soon increased to 3, and the script further evolved to this:

for s in 1 2 3 4; do
    for j in 0 1 2 3 4 5 6 7 8 9; do
        for i in 00 10 20 30 40 50 60 70 80 90; do
            dd if=/dev/ad0s1e of=deleted-file bs=1024k count=10 skip=$s$j$i &>/dev/null;
            grep unique-string deleted-file && echo $s$j$i
        done
    done
done

Pretty soon hit a problem when grep turned up an escape sequence that messed up the screen. Also decided that we may as well save all positive hit files instead of rebuilding them later, so... broke out of the loops, and changed the grep line to this:

grep -q unique-string deleted-file-$s$j$i || rm deleted-file-$s$j$i

We were now happy enough with the script to leave it to itself. We might even have changed the iteration to an auto-increment, except there was no point: what we had would work for the conceivable future (going into the 10's place would be as easy as changing s to 10 11 12..., and we didn't expect to go much beyond 12, because the partition didn't have that much used space).

We finally hit some major positives between 8700 and 8900. Then started the process of extracting the data. 10MB files are too big for editors, and contained mostly binary data that we could get rid of. There were also going to be a lot of false positives, because the unique (to the project) string also showed up in some config files that hadn't been deleted.

First ran this loop:

for i in deleted-file-*; do strings $i | less; done

and tried to manually search for the data. Gave up very soon and changed it to this:

for i in deleted-file-*; do echo $i; strings $i | grep unique-string; done

This showed us all lines where unique-string showed up so we could eliminate files that had no interesting content.

We were finally left with 3 files of 10MB each and the task of extracting the deleted file from here.

The first thing was to find out where in the file the code was. We first tried this:

less deleted-file-8650

search for the unique string and scroll up to the start of the text we wanted. Ctrl+G told us the position into the file that we were at (as a percent of the total). Then scroll to the end and again find the percent.

Now, we were reading 10 blocks of 1MB each, so using the percentage range, we could narrow that down to 1 block.

Again got a percentage value within this 1MB file, and now swapped the block size and count a bit. So we went from 1 block of 1024k to 256 blocks of 4k each. Also had to change the offset from 8650 to 256 times that much. bc came in handy here.
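Spelled out, the offset conversion looks like this (8650 stands in for wherever your hits were):

```shell
# Each 1024k block is 256 4k blocks, so a skip measured in 1M blocks
# scales by the same factor when you drop to bs=4k.
echo $((1024 / 4))      # 256: 4K blocks per 1M block
echo $((8650 * 256))    # 2214400: the same offset at bs=4k
```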

I should probably mention that at this point we'd taken a break and headed out to Guzzler's Inn for a couple of pitchers and to watch the olympics. 32/8 was a slightly hard math problem on our return. Yathin has a third party account of that.

We finally narrowed down the search to two 2K sections and one 8K section, with about 100 bytes of binary content (all ASCII NULs) at the end of one of the 2K sections. This section was the end of the file. Used gvim to merge the pieces into one 12K C++ file, complete with copyright notice and all.

If you plan on doing this yourself, then change the three for loops to this:
i=10;
while [ $i -lt 9000 ]; do
    dd ...
    i=$[ $i+10 ];
done

Secondly, you could save a lot of time by using grep -ab right up front so you'd get an actual byte count of where to start looking, and just skip the rest. Some people have suggested doing the grep -ab right on the filesystem, but that could generate more data than we could store (40GB partition, and only 200MB of space to store it on).
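A sketch of the grep -ab shortcut on one of the saved chunks (the file and search string below are made up for illustration):

```shell
#!/bin/sh
# -a treats binary data as text, -b prints byte offsets, and -o makes
# the offset that of the match itself rather than of its line.
printf 'junkjunkjunk unique-string more code here\n' > chunk

off=$(grep -abo unique-string chunk | head -1 | cut -d: -f1)
echo "match at byte $off"

# jump straight to that neighbourhood instead of scanning 10MB at a time
dd if=chunk bs=1 skip="$off" count=13 2>/dev/null
echo
rm -f chunk
```

With a real byte offset in hand, the dd skip/count juggling above collapses into a single extraction around the match.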

...===...