
Issue:

I installed Red Hat 5.10 x64 on a server which had a faulty HDD. I removed the old faulty HDD and installed a new one with 500 GB capacity. After the installation I need to copy some data from the old HDD to the new HDD under /u001, so I connected the old HDD (320 GB) to the server. It shows up in fdisk -l, but when I try to mount it with

sudo mount /dev/sdb2 or sudo mount /dev/sdb5, it says:

Device already mounted or resource is busy

Note: the old HDD also had an old OS installed on it, as you can see in the fdisk -l output below.
/dev/sda = new HDD
/dev/sdb = old HDD

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          25      200781   83  Linux
/dev/sda2              26       10346    82903432+  8e  Linux LVM
/dev/sda3           10347       11390     8385930   82  Linux swap / Solaris
/dev/sda4           11391       60801   396893857+   5  Extended
/dev/sda5           11391       60801   396893826   8e  Linux LVM

Disk /dev/sdb: 320.0 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *           1          13      104391   83  Linux
/dev/sdb2              14       10242    82164442+  8e  Linux LVM
/dev/sdb3           10243       11286     8385930   82  Linux swap / Solaris
/dev/sdb4           11287       38888   221713065    5  Extended
/dev/sdb5           11287       38888   221713033+  8e  Linux LVM
[admin@testsrv ~]$ sudo mount /dev/sdb2 /media/test/
mount: /dev/sdb2 already mounted or /media/test/ busy
[admin@testsrv ~]$ sudo mount /dev/sdb5 /media/test/
mount: /dev/sdb5 already mounted or /media/test/ busy

Output of mount:

/dev/mapper/VolGroup00_root-LogVol00 on / type ext3 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
/dev/mapper/VolGroup00_u001-LogVol00 on /u001/app/oracle type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
tmpfs on /dev/shm type tmpfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)

Output of pvdisplay:

    sudo pvdisplay
      --- Physical volume ---
      PV Name               /dev/sda5
      VG Name               VolGroup00_u001
      PV Size               378.51 GB / not usable 7.63 MB
      Allocatable           yes (but full)
      PE Size (KByte)       32768
      Total PE              12112
      Free PE               0
      Allocated PE          12112
      PV UUID               E2ibW6-uaDJ-7FMA-OZS0-sApR-DNwK-0jO3Ob

      --- Physical volume ---
      PV Name               /dev/sda2
      VG Name               VolGroup00_root
      PV Size               79.06 GB / not usable 392.50 KB
      Allocatable           yes
      PE Size (KByte)       32768
      Total PE              2530
      Free PE               1
      Allocated PE          2529
      PV UUID               YSGQwx-yIsO-CR0C-4G6r-GI9O-nUya-gE22yk

Output of lvmdiskscan:

sudo lvmdiskscan
  /dev/ramdisk                                                        [       16.00 MB]
  /dev/root                                                           [       79.03 GB]
  /dev/ram                                                            [       16.00 MB]
  /dev/sda1                                                           [      196.08 MB]
  /dev/mapper/ddf1_4035305a8680822620202020202020203532aa703a354a45   [      297.90 GB]
  /dev/ram2                                                           [       16.00 MB]
  /dev/sda2                                                           [       79.06 GB] LVM physical volume
  /dev/mapper/ddf1_4035305a8680822620202020202020203532aa703a354a45p1 [      101.94 MB]
  /dev/ram3                                                           [       16.00 MB]
  /dev/sda3                                                           [        8.00 GB]
  /dev/mapper/ddf1_4035305a8680822620202020202020203532aa703a354a45p2 [       78.36 GB] LVM physical volume
  /dev/ram4                                                           [       16.00 MB]
  /dev/mapper/ddf1_4035305a8680822620202020202020203532aa703a354a45p3 [        8.00 GB]
  /dev/ram5                                                           [       16.00 MB]
  /dev/sda5                                                           [      378.51 GB] LVM physical volume
  /dev/mapper/ddf1_4035305a8680822620202020202020203532aa703a354a45p5 [      211.44 GB] LVM physical volume
  /dev/ram6                                                           [       16.00 MB]
  /dev/VolGroup00_ora/LogVol00                                        [      211.44 GB]
  /dev/ram7                                                           [       16.00 MB]
  /dev/VolGroup00_u001/LogVol00                                       [      378.50 GB]
  /dev/ram8                                                           [       16.00 MB]
  /dev/ram9                                                           [       16.00 MB]
  /dev/ram10                                                          [       16.00 MB]
  /dev/ram11                                                          [       16.00 MB]
  /dev/ram12                                                          [       16.00 MB]
  /dev/ram13                                                          [       16.00 MB]
  /dev/ram14                                                          [       16.00 MB]
  /dev/ram15                                                          [       16.00 MB]
  /dev/sdb1                                                           [      101.94 MB]
  /dev/sdb2                                                           [       78.36 GB]
  /dev/sdb3                                                           [        8.00 GB]
  /dev/sdb5                                                           [      211.44 GB]
  3 disks
  25 partitions
  0 LVM physical volume whole disks
  4 LVM physical volumes
  • What's the output of mount?
    – csny
    Commented Jan 19, 2015 at 10:54
  • Can you show the output of findmnt or mount?
    – Spack
    Commented Jan 19, 2015 at 11:25
  • Also, the output of lsof +D /media/test/ would be helpful
    – csny
    Commented Jan 19, 2015 at 11:34
  • The problem is that the old disk doesn't have plain file systems on the partitions, but an LVM layer between the device and the file systems, as shown by the partition type. Ensure your new system has the LVM tools installed, reboot with the old disk attached, and check lvdisplay to see what LVM devices are detected. You should be able to access those instead of /dev/sdbX.
    – wurtel
    Commented Jan 19, 2015 at 12:13
  • To all helpers: sorry, it was purely my mistake. I totally forgot that these were LVM partitions and need to be mounted with mount /dev/mapper/VG_u001 /media/test. @wurtel, can you tell me what tools can be used to restore the files from LVMs?
    – OmiPenguin
    Commented Jan 19, 2015 at 14:23

3 Answers


If e.g.

mount /dev/sda1 /mnt/tmp

prints

mount: /dev/sda1 is already mounted or /mnt/tmp busy

check if there is any process using that device (/dev/sda1).

It is often an fsck process which runs automatically on system startup. You can check quickly, e.g. with

ps aux | grep sda1
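
If grep turns up nothing, fuser or lsof can also show what is holding the device; for instance (both are standard tools, though they may need to be installed):

fuser -vm /dev/sda1
lsof /dev/sda1

fuser -vm lists every process using the device or a file system mounted from it, while lsof /dev/sda1 lists processes that have the device node itself open, which is where an in-progress fsck will show up.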
  • That was exactly the root of my issue, thank you! (Still hoping that the disk will mount correctly after the fsck check has finished.)
    – Franck
    Commented Aug 25, 2019 at 12:42

Even back in 5.x, RHEL was using LVM by default. You'll have to take a few steps first before you can mount LVM volumes.

If you used the same VG name on the new disk as on the old one, you have a bit of a problem: you have two VGs with the same name. To uniquely identify the VGs you want to manipulate (i.e. the one on /dev/sdb), you'll need the VG UUIDs. Run:

# pvs -o +vg_uuid

to list all detected LVM PVs including their VG UUIDs. You'll also see the VG name of each partition, so you can see whether or not there are name conflicts.
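
For illustration only, a name conflict would look something like this (the /dev/sda2 UUID is taken from the pvdisplay output in the question; the /dev/sdb2 line is hypothetical):

  PV         VG              Fmt  Attr PSize  PFree VG UUID
  /dev/sda2  VolGroup00_root lvm2 a-   79.06G    0  YSGQwx-yIsO-CR0C-4G6r-GI9O-nUya-gE22yk
  /dev/sdb2  VolGroup00_root lvm2 a-   78.36G    0  Zvlifi-Ep3t-e0Ng-U42h-o0ye-KHu1-nl7Ns4

(In this particular question the old disk's VG shows up as VolGroup00_ora in the lvmdiskscan output, so there may be no conflict at all.)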

LVM is by and large smart enough not to mess up your active VG configuration unless you go really out of your way to confuse it. So if the above-mentioned pvs command doesn't show anything on /dev/sdb, run vgscan and then try again.

Once you know the VG UUIDs, you can use the vgrename command to rename any conflicting VGs. If there are no name conflicts, you can skip ahead to vgchange.

(In order to mount the LV(s) inside a VG, you'll need to activate the VG, and a VG won't activate if its name conflicts with an already existing VG.)

The command to rename a VG looks like this:

vgrename Zvlifi-Ep3t-e0Ng-U42h-o0ye-KHu1-nl7Ns4 new_name_for_vg

where the Zvlifi-... alphabet soup is the VG UUID, and the other parameter is just the new name for this VG.
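
After renaming, you can verify that the names are now unique with, for example:

vgs -o vg_name,vg_uuid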

Once the VG name conflicts are resolved (or if there are no conflicts in the first place), you'll need to activate the VG(s) on /dev/sdb. You can simply activate all non-activated VGs LVM sees with this command:

vgchange -ay
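
Or, if you'd rather not touch the other VGs, activate just the one you renamed (new_name_for_vg standing in for whatever name you chose above):

vgchange -ay new_name_for_vg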

When activating a VG, the device names (links) of any LVs inside it will appear as /dev/mapper/<VG name>-<LV name> (and, for legacy compatibility reasons, also as /dev/<VG name>/<LV name>).

At this point, you can mount them as usual.
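
For example, going by the lvmdiskscan output in the question, the old disk's VG is VolGroup00_ora, so (assuming it didn't need renaming) something like this should work:

mkdir -p /media/test
mount /dev/mapper/VolGroup00_ora-LogVol00 /media/test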

  • This worked for me! I was getting the "already mounted or busy" error, so I ran vgchange -ay and was then able to run mount -t ext4 /dev/mapper/my--server--vg-root /tmp/myserver.
    Commented Dec 1, 2019 at 16:20

I've faced such a situation. The experience and solution are narrated on my blog; the snippet is here:

Error: mount: /dev/mapper/STORBCK-backup already mounted or /STORBCK busy

Diagnostic: when we try to mount the /STORBCK file system, we get the above-mentioned error.

Resolution: as the other FS had become read-only, I stopped and started the iSCSI service, and it successfully logged in to the device:

/etc/init.d/iscsi stop
/etc/init.d/iscsi start

https://manastri.blogspot.in/2016/11/mount-devmapperstorbck-backup-already.html

