Unlike the OmniOSce RAW image, the OpenIndiana Cloud image can be problematic on OVH.
Some users might have managed to get it running, but in my case it didn’t work.
This guide presents an alternative method that is both reliable and straightforward.
Process Overview:
- Perform a minimal OpenIndiana installation inside a VirtualBox VM.
- Create a small script to automatically configure the IP address when the target server boots for the first time.
- Export the VM, convert the .vmdk disk image to RAW format, and from a secondary Linux/BSD/illumos system (or from Windows using plink) write the RAW image directly to the NVMe/SSD of your hosted server over SSH.
- Boot from the installed image and, if needed, use IPMI/KVM to finalize network configuration.
1) Boot the Hosted Server in Rescue Mode (Debian 12), Wipe Disks, and Identify the Network Interface
In the OVH control panel, set the boot mode to Rescue (Debian 12) with a password (enter your email to receive the temporary password link).
Reboot the server from the OVH console.
Connect via SSH:
ssh root@PUBLIC-SERVER-IP
List the disks:
lsblk
In this example: /dev/nvme0n1 and /dev/nvme1n1.
Wipe the beginning and end of each disk to remove any leftover ZFS labels (ZFS keeps label copies at both ends of a device), avoiding conflicts later:
zpool labelclear -f /dev/nvme0n1
dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=100
dd if=/dev/zero of=/dev/nvme0n1 bs=1M seek=$(( $(blockdev --getsz /dev/nvme0n1) * 512 / 1024 / 1024 - 100 )) count=100
zpool labelclear -f /dev/nvme1n1
dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=100
dd if=/dev/zero of=/dev/nvme1n1 bs=1M seek=$(( $(blockdev --getsz /dev/nvme1n1) * 512 / 1024 / 1024 - 100 )) count=100
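The seek= value in the tail-wipe commands is just the disk size in MiB minus 100. A quick sketch of that arithmetic (the sector count below is an assumed example value; on the real server take it from blockdev --getsz):

```shell
# Sketch of the tail-wipe offset calculation. The sector count is an
# assumed example (roughly a 4 TB disk); in rescue mode you would use:
#   sectors=$(blockdev --getsz /dev/nvme0n1)
sectors=7814037168
# blockdev --getsz reports 512-byte sectors, so convert to MiB and
# step back 100 MiB from the end of the disk:
tail_mib=$(( sectors * 512 / 1024 / 1024 - 100 ))
echo "$tail_mib"   # offset (in MiB) passed to dd's seek=
```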
Identify the NIC used for the public IP:
ip a
In this example, it’s eth0.
On OpenIndiana, the corresponding interfaces will be ixgbe0 and ixgbe1, with ixgbe0 matching eth0.
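To be sure which ixgbe instance corresponds to eth0, compare MAC addresses: note the MAC shown by ip a in rescue mode, then match it against dladm show-phys -m once OpenIndiana is up. One wrinkle: dladm prints octets without leading zeros. A small helper for the comparison (normalize_mac is hypothetical, not part of either OS):

```shell
# normalize_mac is a hypothetical helper: it zero-pads and lowercases
# each octet so a Linux-style MAC (00:25:90:f0:01:02) and a
# dladm-style MAC (0:25:90:f0:1:2) compare equal.
normalize_mac() {
    out=""
    old_ifs=$IFS; IFS=:
    for octet in $1; do
        out="$out$(printf '%02x' "0x$octet"):"
    done
    IFS=$old_ifs
    printf '%s\n' "${out%:}"
}

normalize_mac "0:25:90:F0:1:2"    # -> 00:25:90:f0:01:02
```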
2) Create the OpenIndiana VM in VirtualBox, Prepare the Network Config, Export and Convert to RAW
Download the ISO from: https://openindiana.org/downloads
(Here we use the Minimal Install DVD version.)
Create a new VirtualBox VM:
--- Name: OImin-OVH
--- Type: Solaris
--- Version: OpenSolaris / Illumos / OpenIndiana (64-bit)
--- RAM: 2048 MB
--- CPUs: 1
--- Disk size: 4 GB
Install OpenIndiana, using the full disk for the OS. Do not configure a wired Ethernet connection during setup (select None).
After installation, reboot, remove the ISO, log in, and become root:
su -
Create the automatic network configuration script: /etc/init.d/network-setup.sh
#!/bin/bash
ipadm delete-ip ixgbe0 2>/dev/null
sleep 5
ipadm create-ip ixgbe0
ipadm create-addr -T static -a local=12.23.34.45/24 ixgbe0/v4
sleep 5
route -p add default 12.23.34.254
Make it executable:
chmod +x /etc/init.d/network-setup.sh
Add it to a startup script and create a symlink:
echo "nohup /etc/init.d/network-setup.sh &" >> /etc/init.d/startall.sh
chmod +x /etc/init.d/startall.sh
ln -s /etc/init.d/startall.sh /etc/rc3.d/S80-startall.sh > /dev/null 2>&1
Add any other customizations (e.g., an SSH key), then shut down the VM.
Detach the ISO, strip the NIC MAC addresses (an option in the export dialog), and export the VM as .ova.
Rename OImin-OVH.ova to OImin-OVH.7z, extract it, and locate OImin-OVH-disk001.vmdk inside.
Convert to RAW (Windows example):
New-Alias -Name "VBoxManage" -Value "C:\Program Files\Oracle\VirtualBox\VBoxManage.exe"
Set-Location C:\Users\YOURSELF\Desktop\OImin-OVH
VBoxManage clonemedium disk ".\OImin-OVH-disk001.vmdk" "OImin-OVH.raw" --format RAW
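If you already have a Linux/BSD box available, the rename-and-extract detour can be skipped: qemu-img (assuming the QEMU tools are installed) performs the same VMDK-to-RAW conversion in one command:

```shell
# Alternative conversion on a Linux/BSD box, assuming qemu-img
# (part of the QEMU tool suite) is installed:
qemu-img convert -f vmdk -O raw OImin-OVH-disk001.vmdk OImin-OVH.raw
```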
3) Transfer the RAW Image Over SSH
From Windows, the simplest method is to use a secondary Linux/BSD/illumos VM to stage the RAW image.
Transfer via SCP:
scp ./OImin-OVH.raw USER@SECONDARY-VM-IP:/tmp/
Connect to the secondary VM (via SSH, for example) and write the RAW image directly to the hosted server, which must still be booted in rescue mode:
dd if=/tmp/OImin-OVH.raw bs=1M | ssh root@12.23.34.45 "dd of=/dev/nvme0n1 bs=1M"
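The image travels over the wire as-is, so the full 4 GB is transferred even though most of it is zeros. A compressed variant is worth trying (assuming gzip is available on both ends); the runnable part below sanity-checks the same pipeline locally, with ordinary files standing in for the image and the disk:

```shell
# Compressed variant of the transfer (assumption: gzip exists on both
# the secondary VM and the rescue system). Mostly-zero images compress
# extremely well, cutting transfer time:
#   dd if=/tmp/OImin-OVH.raw bs=1M | gzip -1 | \
#       ssh root@12.23.34.45 "gunzip | dd of=/dev/nvme0n1 bs=1M"
# Local sanity check of the pipeline, using files instead of devices:
dd if=/dev/zero of=/tmp/demo.raw bs=1M count=8 2>/dev/null
dd if=/tmp/demo.raw bs=1M 2>/dev/null | gzip -1 | gunzip | dd of=/tmp/demo.out bs=1M 2>/dev/null
cmp /tmp/demo.raw /tmp/demo.out && echo "pipeline intact"
```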
4) Boot the Hosted Server from the Installed Image
In the OVH control panel, set the boot mode back to Hard Disk and reboot.
Wait up to 5 minutes, then attempt SSH. If it works, skip ahead to step 5).
If SSH fails, connect via IPMI/KVM to troubleshoot, then check interface names:
dladm
dladm show-phys -m ixgbe0
dladm show-phys -m ixgbe1
If ixgbe1 is the correct NIC, reconfigure:
ipadm delete-ip ixgbe0
ipadm create-ip ixgbe1
ipadm create-addr -T static -a local=12.23.34.45/24 ixgbe1/v4
route add default 12.23.34.254
Test connectivity:
ping 1.1.1.1
If successful, update /etc/init.d/network-setup.sh with the correct interface.
5) Expand the rpool and Mirror to the Second NVMe
Check the current pool size:
zpool list
Get the disk name:
zpool status
Expand to use all available space:
zpool online -e rpool c3t001B448B46BE45C2d0
If you have a second NVMe, mirror it:
echo | format
zpool labelclear -f /dev/dsk/c4t001B448B46BBF825d0
ashift=$(zdb | awk '/ashift/ {print $2; exit}')
zpool attach -o ashift="$ashift" rpool c3t001B448B46BE45C2d0 c4t001B448B46BBF825d0
zpool status
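One hedged addition at this point: zpool attach copies the data but not the boot blocks, so the server could not yet boot from the second NVMe alone. On OpenIndiana, a single command installs the loader on every device in the root pool:

```shell
# Install the boot loader on all rpool devices, so either NVMe can
# boot the system if its mirror partner fails:
bootadm install-bootloader
```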
Your mirrored pool is now ready.