Hetzner - Emergency access - Case Aug. 2024
While experimenting with iptables in Aug. 2024, I locked myself out of a test server. Restarting wouldn't fix the problem, so it was time for the Rescue system. Thankfully it was only a test server, so there wasn't much pressure.
Plan
- Start this server through PXE with Hetzner's 'Linux-like rescue OS' (based on Debian)
- Access through SSH
- Mount the storage device(s) that contain /etc/ssh/sshd_config
- In this file, change SSH port number back to 22
- Restart server and have SSH access again.
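In shell terms, the rescue-side part of this plan boils down to roughly the sketch below. The device names are placeholders; the real partitions are only identified further down in these notes.

# Sketch only - the actual devices depend on the RAID layout figured out below
mdadm --assemble /dev/md0 /dev/<partition-a> /dev/<partition-b>   # assemble the array that holds the root fs
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
vim /mnt/md0/etc/ssh/sshd_config   # set the SSH port back to 22
umount /mnt/md0
reboot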
Start rescue OS & get SSH access
- Robot » Rescue: Operating system: Linux. Public key: the one from the computer I'm currently working on. A login name and password are given
- Robot » Reset » Execute an automatic hardware reset
- Log in using
ssh 123.45.67.89 -l jeroen
- I get an error about a changed fingerprint. That makes sense, as the server is now booted via PXE into the rescue image instead of its own OS » Execute the suggested command to remove the stale entry (see the example right after this list)
- Try again:
ssh 123.45.67.89 -l jeroen
...
...And I'm in!
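The removal command that ssh suggests is the same one that shows up again after the final reboot below; it typically looks like this:

ssh-keygen -f ~/.ssh/known_hosts -R "123.45.67.89"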
Figuring out the RAID configuration
This server has a bunch of storage. Which device should I mount? And how do I take into account that this is probably RAID1?
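As a side note (not part of the original session): a quick way to see which md arrays the rescue system already recognizes, before digging through the output below, is:

cat /proc/mdstat
mdadm --examine --scan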
lsblk
Output of lsblk, with the result of trying to mount the respective partitions in the Mounting? column:
$ lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS  Mounting?
loop0         7:0    0   3.1G  1 loop
sda           8:0    0   1.8T  0 disk
|-sda1        8:1    0     4G  0 part              → linux_raid_member
|-sda2        8:2    0     1G  0 part              → Wrong fs type, ...
`-sda3        8:3    0 889.3G  0 part              → Wrong fs type, ...
sdb           8:16   0   1.8T  0 disk
|-sdb1        8:17   0     4G  0 part              → linux_raid_member
|-sdb2        8:18   0     1G  0 part              → Wrong fs type, ...
`-sdb3        8:19   0 889.3G  0 part              → Wrong fs type, ...
nvme1n1     259:0    0 894.3G  0 disk
|-nvme1n1p1 259:11   0     4G  0 part              → linux_raid_member
|-nvme1n1p2 259:12   0     1G  0 part              → linux_raid_member
`-nvme1n1p3 259:13   0 889.3G  0 part              → linux_raid_member
nvme0n1     259:4    0 894.3G  0 disk
|-nvme0n1p1 259:8    0     4G  0 part              → linux_raid_member
|-nvme0n1p2 259:9    0     1G  0 part              → linux_raid_member
`-nvme0n1p3 259:10   0 889.3G  0 part              → linux_raid_member
lsblk -f
$ lsblk -f
NAME        FSTYPE            FSVER LABEL    UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
loop0       ext2              1.0            7ad8dd31-a35f-4a80-a784-98e1b3958266
sda         linux_raid_member 1.2   dvb8v2:3 03626ea8-76c3-5eb8-f073-7d0a13cfa567
|-sda1      linux_raid_member 1.2   rescue:0 da695806-7c49-f98c-add2-c2a9b9fe0a04
|-sda2
`-sda3
sdb         linux_raid_member 1.2   dvb8v2:3 03626ea8-76c3-5eb8-f073-7d0a13cfa567
|-sdb1      linux_raid_member 1.2   rescue:0 da695806-7c49-f98c-add2-c2a9b9fe0a04
|-sdb2
`-sdb3
nvme1n1
|-nvme1n1p1 linux_raid_member 1.2   rescue:0 616ba3b0-8829-9128-e105-64535295f062
|-nvme1n1p2 linux_raid_member 1.2   rescue:1 bac6ff1d-f130-2348-ad30-eb6164525574
`-nvme1n1p3 linux_raid_member 1.2   rescue:2 7ba7d7e2-dfa2-0a73-d1c2-abdcbda9ae99
nvme0n1
|-nvme0n1p1 linux_raid_member 1.2   rescue:0 616ba3b0-8829-9128-e105-64535295f062
|-nvme0n1p2 linux_raid_member 1.2   rescue:1 bac6ff1d-f130-2348-ad30-eb6164525574
`-nvme0n1p3 linux_raid_member 1.2   rescue:2 7ba7d7e2-dfa2-0a73-d1c2-abdcbda9ae99
mdadm - Partitions sda
$ mdadm --examine /dev/sda1
/dev/sda1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : da695806:7c49f98c:add2c2a9:b9fe0a04
           Name : rescue:0  (local to host rescue)
  Creation Time : Sat Sep 23 09:36:27 2023
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 8378368 sectors (4.00 GiB 4.29 GB)
     Array Size : 4189184 KiB (4.00 GiB 4.29 GB)
    Data Offset : 10240 sectors
   Super Offset : 8 sectors
   Unused Space : before=10160 sectors, after=0 sectors
          State : clean
    Device UUID : eced0c6c:0eeee884:1b7e5e34:d7ad829e

    Update Time : Sat Sep 23 09:38:35 2023
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : cf8aabc - correct
         Events : 19

    Device Role : Active device 3
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

$ mdadm --examine /dev/sda2
mdadm: No md superblock detected on /dev/sda2.

$ mdadm --examine /dev/sda3
mdadm: No md superblock detected on /dev/sda3.
mdadm - Partitions sdb
$ mdadm --examine /dev/sdb1
/dev/sdb1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : da695806:7c49f98c:add2c2a9:b9fe0a04
           Name : rescue:0  (local to host rescue)
  Creation Time : Sat Sep 23 09:36:27 2023
     Raid Level : raid1
   Raid Devices : 4

 Avail Dev Size : 8378368 sectors (4.00 GiB 4.29 GB)
     Array Size : 4189184 KiB (4.00 GiB 4.29 GB)
    Data Offset : 10240 sectors
   Super Offset : 8 sectors
   Unused Space : before=10160 sectors, after=0 sectors
          State : clean
    Device UUID : 1ff892f2:7c6484c6:49d611ef:cc63ca22

    Update Time : Sat Sep 23 09:38:35 2023
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 14153980 - correct
         Events : 19

    Device Role : Active device 2
    Array State : AAAA ('A' == active, '.' == missing, 'R' == replacing)

$ mdadm --examine /dev/sdb2
mdadm: No md superblock detected on /dev/sdb2.

$ mdadm --examine /dev/sdb3
mdadm: No md superblock detected on /dev/sdb3.
mdadm - Partitions nvme1n1
$ mdadm --examine /dev/nvme1n1p1 /dev/nvme1n1p2 /dev/nvme1n1p3
/dev/nvme1n1p1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 616ba3b0:88299128:e1056453:5295f062
           Name : rescue:0  (local to host rescue)
  Creation Time : Sat Sep 23 12:34:12 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 8378368 sectors (4.00 GiB 4.29 GB)
     Array Size : 4189184 KiB (4.00 GiB 4.29 GB)
    Data Offset : 10240 sectors
   Super Offset : 8 sectors
   Unused Space : before=10160 sectors, after=0 sectors
          State : clean
    Device UUID : bcfaff15:f0f35036:5b28a91e:4e43dc97

    Update Time : Thu Aug 8 02:49:18 2024
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 9c04525b - correct
         Events : 52

    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/nvme1n1p2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : bac6ff1d:f1302348:ad30eb61:64525574
           Name : rescue:1  (local to host rescue)
  Creation Time : Sat Sep 23 12:34:12 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB)
     Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB)
    Data Offset : 4096 sectors
   Super Offset : 8 sectors
   Unused Space : before=4016 sectors, after=0 sectors
          State : clean
    Device UUID : b195e74e:f55259d2:64c1bdb5:4ea16e0e

    Update Time : Thu Aug 8 16:53:49 2024
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 2bb66c10 - correct
         Events : 86

    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/nvme1n1p3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 7ba7d7e2:dfa20a73:d1c2abdc:bda9ae99
           Name : rescue:2  (local to host rescue)
  Creation Time : Sat Sep 23 12:34:12 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1864630960 sectors (889.13 GiB 954.69 GB)
     Array Size : 932315456 KiB (889.13 GiB 954.69 GB)
  Used Dev Size : 1864630912 sectors (889.13 GiB 954.69 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=48 sectors
          State : active
    Device UUID : b0652598:a8b4ea92:58bef365:08900e48

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Aug 8 17:12:07 2024
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 8e411884 - correct
         Events : 12729

    Device Role : Active device 1
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
mdadm - Partitions nvme0n1
$ mdadm --examine /dev/nvme0n1p1 /dev/nvme0n1p2 /dev/nvme0n1p3
/dev/nvme0n1p1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 616ba3b0:88299128:e1056453:5295f062
           Name : rescue:0  (local to host rescue)
  Creation Time : Sat Sep 23 12:34:12 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 8378368 sectors (4.00 GiB 4.29 GB)
     Array Size : 4189184 KiB (4.00 GiB 4.29 GB)
    Data Offset : 10240 sectors
   Super Offset : 8 sectors
   Unused Space : before=10160 sectors, after=0 sectors
          State : clean
    Device UUID : 4de6c408:7aed42d3:ae139e7d:a8f0f929

    Update Time : Thu Aug 8 02:49:18 2024
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 1ccdd023 - correct
         Events : 52

    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/nvme0n1p2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : bac6ff1d:f1302348:ad30eb61:64525574
           Name : rescue:1  (local to host rescue)
  Creation Time : Sat Sep 23 12:34:12 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 2093056 sectors (1022.00 MiB 1071.64 MB)
     Array Size : 1046528 KiB (1022.00 MiB 1071.64 MB)
    Data Offset : 4096 sectors
   Super Offset : 8 sectors
   Unused Space : before=4016 sectors, after=0 sectors
          State : clean
    Device UUID : c93c4468:832813e8:bba43504:42901777

    Update Time : Thu Aug 8 16:53:49 2024
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 11edbb00 - correct
         Events : 86

    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)

/dev/nvme0n1p3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 7ba7d7e2:dfa20a73:d1c2abdc:bda9ae99
           Name : rescue:2  (local to host rescue)
  Creation Time : Sat Sep 23 12:34:12 2023
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1864630960 sectors (889.13 GiB 954.69 GB)
     Array Size : 932315456 KiB (889.13 GiB 954.69 GB)
  Used Dev Size : 1864630912 sectors (889.13 GiB 954.69 GB)
    Data Offset : 264192 sectors
   Super Offset : 8 sectors
   Unused Space : before=264112 sectors, after=48 sectors
          State : active
    Device UUID : 9268d222:79240fd8:57a56064:81d90e59

Internal Bitmap : 8 sectors from superblock
    Update Time : Thu Aug 8 17:12:07 2024
  Bad Block Log : 512 entries available at offset 16 sectors
       Checksum : 6d7fbbae - correct
         Events : 12729

    Device Role : Active device 0
    Array State : AA ('A' == active, '.' == missing, 'R' == replacing)
Configuration when ordered
1 x Dedicated Root Server AX161
- Location: Germany, FSN1
- Rescue system (English)
- 1 x Primary IPv4
- 2 x 960 GB NVMe SSD Datacenter Edition
- 2 x 2 TB SATA Enterprise Hard Drive
Conclusions
- Some partitions have identical UUIDs, indicating that they belong to the same RAID arrays
- The NVMe SSDs are primary storage (including the OS)
- Some partitions on the backup SATA disks may not have been formatted yet, hence the missing filesystem information
- The two NVMe SSDs are probably configured in RAID1, meaning that they are copies of each other
- The primary (OS) storage is probably nvme1n1p3 & nvme0n1p3. They have matching UUIDs (7ba7d7e2-dfa2-0a73-d1c2-abdcbda9ae99) and matching sizes (a quick way to double-check this is sketched below).
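That conclusion can be double-checked (not from the original session) by printing only the Array UUID of both candidate partitions; the two lines should be identical:

mdadm --examine /dev/nvme0n1p3 /dev/nvme1n1p3 | grep 'Array UUID'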
Assembling & mounting
Finally, let's assemble the RAID array:
$ mdadm --assemble --uuid=7ba7d7e2-dfa2-0a73-d1c2-abdcbda9ae99 /dev/md0 /dev/nvme1n1p3 /dev/nvme0n1p3
mdadm: /dev/md0 has been started with 2 drives.
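Before mounting, it doesn't hurt to confirm that the array came up healthy. Something like this (not part of the original session) would do:

mdadm --detail /dev/md0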
And let's mount it:
mkdir /mnt/md0
mount /dev/md0 /mnt/md0
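A quick sanity check (not in the original notes) that the mounted array really is the root filesystem we are after:

ls -l /mnt/md0/etc/ssh/sshd_config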
Editing the SSH configuration file
cd /mnt/md0/etc/ssh
vim sshd_config
Changed the SSH port number back to 22.
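For completeness, the change itself is a single directive in sshd_config. The old custom port below is a made-up example, not the value that was actually configured:

# /mnt/md0/etc/ssh/sshd_config
#Port 2222   # hypothetical example of the old custom port
Port 22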
Unmount
Before restarting the server, let's unmount the device, just to be sure that data has been written:
cd ~
umount /mnt/md0
About cd ~: you can't unmount a filesystem while your current working directory is still inside it :).
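For reference: trying to unmount while the shell is still inside /mnt/md0 would fail with something along these lines (not from the actual session):

$ umount /mnt/md0
umount: /mnt/md0: target is busy.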
Reboot
When a Hetzner server is rebooted from within the rescue system, it boots back into the normally installed OS:
Actual reboot
$ reboot
Connection to 123.45.67.89 closed by remote host.
Connection to 123.45.67.89 closed.
This is a dedicated server rather than a VPS, so restarting takes about a minute.
Fingerprint dance
And once again, a different computer presents itself at the other end of the SSH channel, and the ssh client rightly finds that alarming:
$ server12
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:fdfdsfdsfdsfdsfdsfdsfds.
Please contact your system administrator.
Add correct host key in /home/jeroen/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/jeroen/.ssh/known_hosts:37
  remove with:
  ssh-keygen -f "/home/jeroen/.ssh/known_hosts" -R "123.45.67.89"
ECDSA host key for 123.45.67.89 has changed and you have requested strict checking.
Host key verification failed.
Remove the old host key entry:
$ ssh-keygen -f "/home/jeroen/.ssh/known_hosts" -R "123.45.67.89"
# Host 123.45.67.89 found: line 37
/home/jeroen/.ssh/known_hosts updated.
Original contents retained as /home/jeroen/.ssh/known_hosts.old
Log in again
$ server12
The authenticity of host '123.45.67.89 (123.45.67.89)' can't be established.
ECDSA key fingerprint is SHA256:dsfdsfdsfdsfdsfdsfdsfds.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '123.45.67.89' (ECDSA) to the list of known hosts.