
I am clearing out a load of old hard disks and reformatting them to ensure they are blank. I previously had an x86 PC with BIOS RAID running Fedora and several sets of 4 drives which IIRC were RAID 0+1 arrays, in various sizes including 40 GB and 500 GB. That machine is dead and I cannot boot it.

All of these drives fail to format, and none of my other drives do.

I have disposed of most of my old PCs and all I have to work with is a USB IDE adaptor which I use to mount drives on my Mac, and an old x86 PC (which does not have a RAID bios) on which I have CentOS 7 minimal installed.

When mounted on the Mac, the 500 GB drives show up as 1.6 TB volumes (yellow icons) with a single partition of a more reasonable size.


diskutil list returns this for the drive:

/dev/disk2 (external, physical):
   #:                       TYPE NAME                    SIZE       IDENTIFIER
   0:     FDisk_partition_scheme                        *1.6 TB     disk2
   1:                      Linux                         500.1 GB   disk2s1

Neither the drive nor the partition can be formatted (the Erase menu option); in either case the error

Wiping volume data to prevent future accidental probing failed. : (-69825)

is reported. I tried all the options for partition scheme and filesystem type (I can't see how that would make a difference, but a question elsewhere suggested it might).


After googling similar questions I attached the drive to the x86 machine, alongside one good boot drive containing CentOS 7 minimal. The boot drive shows up as /dev/sda and the failed drive as /dev/sdb.

fdisk seems to run successfully, but a subsequent fdisk -l shows nothing.

I used dd if=/dev/zero of=/dev/sdb to write zeros over the whole of a 40 GB drive, and over the start and end of a 500 GB drive. When I dd the first block back it contains all zeros.
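For reference, zeroing just the start and end of a disk can be scripted; a minimal sketch of that idea, assuming GNU dd and demonstrated on a scratch file (disk.img is a stand-in for the real device):

```shell
# Make a 4 MiB scratch "disk" filled with 0xFF so the wipe is visible.
dd if=/dev/zero bs=1M count=4 status=none | tr '\0' '\377' > disk.img

# On a real disk you would use: SIZE=$(blockdev --getsize64 /dev/sdb)
SIZE=$(stat -c %s disk.img)

# Zero the first MiB (partition table, filesystem superblocks) and the
# last MiB (where BIOS-RAID metadata typically lives). conv=notrunc stops
# dd truncating the image file; it is harmless on a block device.
dd if=/dev/zero of=disk.img bs=1M count=1 conv=notrunc status=none
dd if=/dev/zero of=disk.img bs=1M count=1 conv=notrunc status=none \
   seek=$(( SIZE / 1048576 - 1 ))
```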

Again fdisk seems to run successfully, letting me create whatever type of disk label and single partition I like, but the disk is still blank afterwards.

Then the weirdest thing happened. I decided to dd a copy of the superblock from the good drive onto the failed drive; it should look like there is a filesystem there even if it won't actually read, right?

No, it did not. So I dd'd the superblock back from the failed drive and ran cmp -l of that against the original, and I noticed that every difference is at an even address, and in every case the original has the top bit set while the copy that went via the failed disk does not.
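That check can be reproduced in miniature with two throwaway files (the names are made up for illustration): an "original" of all 0xFF bytes, and a "read-back copy" with the top bit of each high byte cleared, matching the 7fff words in the hexdump output further down:

```shell
# 8-byte "original": all 0xFF.
printf '\377\377\377\377\377\377\377\377' > orig
# "Read-back copy": top bit of every high (odd-addressed) byte cleared,
# so each little-endian 16-bit word reads as 0x7fff.
printf '\377\177\377\177\377\177\377\177' > copy

# cmp -l prints one line per differing byte: a 1-based offset, then the
# octal byte values from each file. Every diff lands on an even (1-based)
# offset and reads 377 vs 177, i.e. only the top bit has changed.
cmp -l orig copy || true   # cmp exits non-zero when the files differ
```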

So I created a 512-byte file of all 0xFF bytes and verified that this is happening in every 16-bit word:

[root@localhost ~]# hexdump dels
0000000 ffff ffff ffff ffff ffff ffff ffff ffff
*
0000200
[root@localhost ~]# dd if=/dev/zero of=/dev/sdb bs=512 count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0225038 s, 22.8 kB/s
[root@localhost ~]# dd if=/dev/sdb of=foo count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0150744 s, 34.0 kB/s
[root@localhost ~]# hexdump foo
0000000 0000 0000 0000 0000 0000 0000 0000 0000
*
0000200
[root@localhost ~]# dd if=dels of=/dev/sdb
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0283328 s, 18.1 kB/s
[root@localhost ~]# dd if=/dev/sdb of=foo count=1
1+0 records in
1+0 records out
512 bytes (512 B) copied, 0.0141502 s, 36.2 kB/s
[root@localhost ~]# hexdump foo
0000000 7fff 7fff 7fff 7fff 7fff 7fff 7fff 7fff
*
0000200
[root@localhost ~]# exit

I get the same result with multiple drives.

I wondered if it was a faulty IDE cable or interface, so I disconnected the CD drive, which was on its own on the secondary IDE channel, put the failed drive there, and got exactly the same results.

I am curious to know where macOS Disk Utility is getting the 1.6 TB from. I guess it's reading some old RAID metadata from somewhere other than the very start or end of the drive; is that possible?
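Firmware/BIOS RAID formats do generally keep their metadata away from the start of the disk (Intel's IMSM format, for example, sits near the end), so zeroing only the first sectors leaves it behind. On the CentOS box, wipefs from util-linux will list and erase every signature libblkid knows about, at whatever offset it lives. A sketch, demonstrated on a scratch file stamped with a swap signature as a stand-in for RAID metadata; the same commands would be run against the whole disk (e.g. /dev/sdb) instead:

```shell
# 1 MiB scratch file standing in for the disk.
dd if=/dev/zero of=disk.img bs=1M count=1 status=none
mkswap disk.img >/dev/null 2>&1     # plant a known signature

wipefs disk.img                     # list signatures found (here: swap)
wipefs -a disk.img                  # erase every known signature
wipefs disk.img                     # prints nothing once the disk is clean
```

If dmraid or mdadm is installed, dmraid -r -E /dev/sdb and mdadm --zero-superblock /dev/sdb are more targeted ways to remove old RAID metadata, though wipefs covers the common cases.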

Once the drive has been zeroed entirely the Mac does not see it at all, so it seems that if the Mac were all I had there would be nowhere to go.

So it seems I have a set of drives which cannot write a 1 in the MSB of each 16-bit word. How is that even possible?

If anyone can explain what's happening (including why the Mac can't see the drive after it has been totally zeroed) I would love to understand it, and if anyone can suggest a way to reset the drives so they can be formatted I would appreciate it. I would prefer to put them on eBay for a nominal sum, or give them away, than see them go to landfill, but I seem to be approaching the point where physical destruction is the only option.

Update 1

Following John Kintzele's answer below, I tried booting gparted from USB. The process sort of looked like it worked, but the drives were not readable elsewhere, and the following errors were reported, which look like hardware errors to me. Perhaps all these drives are physically damaged in some way?

[screenshot: gparted reporting ICRC ABRT errors]

  • I am confused: All this using the same IDE/USB adapter? Commented Aug 7, 2023 at 21:03
  • Sorry, this is with the drive connected via IDE and the PC booted from a USB memory stick with the gparted live iso image as suggested by John Kintzele, but for some reason his answer has disappeared. Commented Aug 7, 2023 at 21:52
  • Okay, and regardless of the drive you find this bit-flip issue and/or ICRC ABRT errors? ICRC seems to suggest a CRC error while the host is communicating with the drive. This is not per se a cabling error. Commented Aug 7, 2023 at 22:31
  • I believe the results are the same regardless of drive and which IDE channel I use. I am halfway through writing zeros over the whole of one of the 500 GB drives at the moment, so can't retest to be absolutely certain. Commented Aug 8, 2023 at 7:46
  • One other weird thing: the PC doesn't like my USB IDE adaptor and does not boot with it connected, but I did get it to work once by plugging it in at just the right moment in the boot process. A drive over whose superblock I had written 0xFFFF was connected, and when I read it back I am sure I got 0xFFFF, but when the same drive is connected to the IDE interface I get 0x7FFF. The thing is, I also get 0x7FFF when I read it through the USB adaptor on the Mac, so I don't know what to think. I have been unable to get the USB adaptor to work a second time, so I had sort of disregarded the one-off event. Commented Aug 8, 2023 at 7:50

1 Answer


Normally my approach is to physically mount a drive (one at a time, or at least not RAIDed) and then run dd if=/dev/zero of=/dev/sd<x> bs=16384 (16 KB has been the fastest block size for other purposes, but you can leave it out if you want).

I've done this in parallel off live CDs like this:

dd if=/dev/zero of=/dev/sda bs=16384 &
dd if=/dev/zero of=/dev/sdb bs=16384 &
dd if=/dev/zero of=/dev/sdc bs=16384 &

etc
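A fuller sketch of that parallel pattern, demonstrated on scratch files (stand-ins for /dev/sda, /dev/sdb, ...) and assuming GNU dd and cmp:

```shell
# Background one dd per "disk", then wait for all of them before
# shutting down the live environment.
for img in a.img b.img c.img; do
    dd if=/dev/zero of="$img" bs=1M count=4 status=none &
done
wait    # returns only once every background dd has exited

for img in a.img b.img c.img; do
    # Spot-check: the first sector should now read back as all zeros.
    cmp -s -n 512 "$img" /dev/zero && echo "$img wiped"
done
```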

It sounds like you're seeing the raw size of three 500 GB drives, as if they were still RAIDed (1.6 TB is nominally 3 × 500 GB, especially if one tool reports in gibibytes and the other in gigabytes).

Did you have 3 (or 6, if it was RAID10) drives in this array?

  • As I said, it was 4 drives which IIRC were a RAID 0+1 array. The machine I am doing this on does not support RAID; I am mounting one drive at a time. The 1.6 TB number must be coming off the drive somehow, but I can't fathom it. Commented Aug 7, 2023 at 21:53
