Hey (sorry for the first post being a long one 🙂),
so I have now, twice within three months, run into an issue where a bhyve VM starts spewing errors by the thousands, like: vtbd0: hard error cmd=write <big number>, <big number>
Then I have to kill -9 the bhyve process to stop it.
The first time I could write it off as normal wear; the SSD was about five years old.
I had a "new" (as in never opened) WD Black, plugged that in, and reinstalled FreeBSD 13.5 (ZFS).
Basic usage for this machine is bhyve VMs (3-5) running things like a BETA, 15, a browser sandbox and a mail sandbox VM.
Is running 50+ GB images bad for SSDs? It just doesn't make sense that two drives would fail this quickly, one on top of the other. The second one going crazy after 2-3 months... no.
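Before blaming the second drive, it might be worth checking whether the disk itself actually reports errors. A minimal check with smartmontools (the device name below is an assumption; adjust for your setup):

```shell
# Install sysutils/smartmontools first: pkg install smartmontools
# SATA SSDs usually show up as /dev/ada0, NVMe as /dev/nvme0 or /dev/nda0.
smartctl -a /dev/ada0
```

If SMART shows no reallocated sectors or media errors, the problem may be elsewhere (cabling, controller, or something in the bhyve/ZFS layer) rather than two dead SSDs in a row.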
Anyway, I wanted to salvage data from one of the VMs.
First you need to do something like:
$ mdconfig -a -t vnode -f fbvm.img -u 1
That creates the /dev/md1p1, /dev/md1p2, /dev/md1p3 device nodes (boot, swap, root).
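To double-check the partition layout before going further, gpart works on md devices too:

```shell
# Show the partition table of the attached memory disk
gpart show md1
```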
After that you should be able to run:
$ zpool import
and it should list something like:
....
zroot
md1p3
Now you should be able to import that as a new ZFS pool.
$ zfs mount zroot # this fails, saying something like "zroot already exists" (the host's own pool is also called zroot)
Ok ok, you think, just use a new name:
$ zfs mount zroot zroot2 # try to mount it as zroot2
But no no no... no luck.
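For what it's worth, renaming on import is a zpool operation, not a zfs mount one, which may be why the attempts above fail. A sketch of what I'd try next (the guest's root dataset name is an assumption; check with zfs list first):

```shell
# Import the guest pool under a new name so it doesn't clash with the
# host's own zroot; -R sets an altroot so nothing mounts over the host's /,
# -f forces import of a pool that was never exported, -N skips auto-mounting.
zpool import -f -R /mnt -N zroot zroot2

# See what datasets the guest pool actually contains
zfs list -r zroot2

# Mount the guest's root dataset (zroot2/ROOT/default is the usual name
# on a default FreeBSD install, but verify against the listing above)
zfs mount zroot2/ROOT/default

# ... copy your data out of /mnt, then clean up:
zpool export zroot2
mdconfig -d -u 1
```

The altroot is important: without it, the guest's datasets can try to mount at their recorded mountpoints (/, /usr, ...) right on top of the host's filesystems.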
Ideas welcome. I think I've screwed up the image anyway, but it would be good to know for next time.
I think I'll start using UFS instead of ZFS. I juuust want things to work, so I can sleep at night 😛
The forums look cosy 🙂