Friday, May 27, 2022

Nested Virtualization KVM/QEMU and Qubes-OS

I find virtualization and Qubes-OS very useful for exploring complex packages or new versions of Linux. If I try a complex package and find it's too complex, or its dependencies are too complex, or it doesn't give me what I want, I just throw away the Virtual Machine (VM). If I like the VM, the Qube, I can copy it to other machines running Qubes-OS.


Qubes-OS is still my favorite, since you can spin up and/or throw away a VM in 30 seconds. I recently had a contract where I was using RedHat, and I converted my main Linux laptop to RedHat and have been using KVM/QEMU for virtualization. However, it's slow to create a new VM.

 

I finally have nested virtualization working, with Qubes-OS running as a VM under RedHat KVM/QEMU, and I just want to put some notes down. I know this invalidates some of the security of Qubes-OS, at least according to the authors, but for my purposes I think this is OK.


This Fedora documentation page is helpful:


https://docs.fedoraproject.org/en-US/quick-docs/using-nested-virtualization-in-kvm/


First, in RedHat 8.5, you need to enable Input-Output Memory Management Unit (IOMMU) support. My /etc/default/grub looks like this:

 

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto resume=/dev/mapper/rhel-swap rd.luks.uuid=luks-50fdf3da-12a5-450e-863b-6b981be120bc rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet intel_iommu=on iommu=pt modprobe.blacklist=nouveau"
GRUB_DISABLE_RECOVERY="true"
GRUB_ENABLE_BLSCFG=true

The important parts are enabling the IOMMU with "intel_iommu=on" and enabling IOMMU passthrough with "iommu=pt". I blacklist nouveau because I use the NVIDIA drivers for CUDA support; if the NVIDIA driver detects that nouveau is loaded or was used for boot, it will refuse to run.
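The edit to /etc/default/grub only takes effect once the GRUB configuration is regenerated and the machine is rebooted. A minimal sketch, assuming a non-UEFI (BIOS) boot on RedHat 8.5; on UEFI systems the grub.cfg path differs:

sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
# After the reboot, confirm the IOMMU came up:
dmesg | grep -i -e DMAR -e IOMMU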

Next I created a /etc/modprobe.d/kvm.conf file containing:

 

# Setting modprobe kvm_intel/kvm_amd nested = 1
# only enables Nested Virtualization until the next reboot or
# module reload. Uncomment the option applicable
# to your system below to enable the feature permanently.
#
# User changes in this file are preserved across upgrades.
#
# For Intel
options kvm_intel nested=1
options kvm_intel enable_shadow_vmcs=1
options kvm_intel enable_apicv=1
options kvm_intel ept=1
#
# For AMD
#options kvm_amd nested=1
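
Once the module options are in effect (after a reboot, or after reloading the kvm_intel module), nested support can be confirmed from sysfs. A quick check, assuming an Intel machine:

cat /sys/module/kvm_intel/parameters/nested
# Prints "Y" (or "1" on older kernels) when nested virtualization is enabled.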

When creating the VM, I found that you need to "Copy host CPU configuration". For Qubes-OS 4.1, I chose the "Generic 2018" Linux OS variant. Qubes-OS uses a stripped-down Fedora 32 for its Domain 0 (dom0). I use the QXL VGA driver and Spice for the display server; this is the best combination of display driver support I have found so far for KVM/QEMU VMs.
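For reference, roughly the same VM definition can be built from the command line with virt-install. This is only a sketch of the settings described above, not the exact command I used: the VM name, memory, vCPU count, disk size, ISO path, and os-variant name are all assumptions to adjust for your own system, and --cpu host-passthrough approximates the "Copy host CPU configuration" checkbox.

# Hypothetical virt-install equivalent of the virt-manager settings above
virt-install \
  --name qubes-nested \
  --memory 8192 --vcpus 4 \
  --cpu host-passthrough \
  --video qxl --graphics spice \
  --disk size=80 \
  --cdrom /var/lib/libvirt/images/Qubes-R4.1-x86_64.iso \
  --os-variant linux2018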


Finally, you need to set the Qubes-OS sys-net and sys-usb VMs to Para-Virtualized (PV) mode, according to this link:

 

https://www.qubes-os.org/doc/installation-troubleshooting/

 

The important part of this page is down at the bottom:

 

  1. Change the virtualization mode of sys-net and sys-usb to “PV”
  2. Add qubes.enable_insecure_pv_passthrough to GRUB_CMDLINE_LINUX in /etc/default/grub
  3. Run sudo grub2-mkconfig -o /boot/efi/EFI/qubes/grub.cfg
  4. Reboot

I am currently using non-UEFI boot (the non-OVMF BIOS), so my grub mkconfig command is "sudo grub2-mkconfig -o /boot/grub2/grub.cfg".
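For completeness, here is a rough sketch of those steps as commands run in the Qubes-OS dom0, if you prefer the command line to the Qube Manager GUI. qvm-prefs and the virt_mode property are standard Qubes tools; the GRUB edit itself is done by hand in /etc/default/grub.

qvm-prefs sys-net virt_mode pv
qvm-prefs sys-usb virt_mode pv
# The new mode takes effect the next time each qube starts.
# Then add qubes.enable_insecure_pv_passthrough to GRUB_CMDLINE_LINUX
# in /etc/default/grub, regenerate the config (non-UEFI path here), and reboot:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot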

 

And with that bit of configuration I am able to run a nested virtual Qubes-OS.



Thursday, May 19, 2022

What's Ed Doing with his Life?

 

You might be wondering what I have been doing with my life, and why I am not chasing contracts hard.

In general, I have been making progress on my house, my cars, and my computers. I haven't been chasing contracts as much as I should, but I try to let people know that I'm available. My preference would be to find something here in Ohio, or at least with people we know.

It's been about 15 years since I really spent any significant time at my house, and it shows. There is a possibility I will be back at L3, in either Florida or Utah, and it may be years again before I get another chunk of time to work on some of this.

The outside, where I have spent most of my time, is improving. Adam Katter, who does my lawn, is good at showing up and cutting the lawn when it needs to be cut, but his crew only cuts where they think it's safe to cut. The area that they will cut has been shrinking for the last 15 years, and I really wanted to get back to them mowing most of the lawn. With about three days' work, I resurrected one of my two riding lawn mowers and savagely cut down lots of areas they would not mow. At the very end, the secondary deck belt snapped, but on Monday Katter's crew showed up and cut all of the area I had mowed down. A victory, and the lawn looks better.

Today was replacing the deck belt and undoing some more of the jury-rigging someone had done to the deck. I couldn't find the right belt, but found one that was close. On my free John Deere, there is a relationship between the tension on the primary deck belt and the secondary deck belt: you need both belts to be the correct size in order for the tension to be right on both. I fashioned an offset plate for tensioning the primary deck belt until Amazon can bring me the correct belt.

I wanted to get the mower going to mow the spot where the old RV was sitting. The hoses and wires for the old RV were trapped under a collection of trash cans that Uncle had retired when they developed cracks or holes. With a Sawzall, I was able to cut up the trash cans and make them and their contents fit in the newer trash can. Then I was able to disconnect the hoses and electricity to the RV, get it started, and get it out of its hole, and then savagely cut down what was growing around it. Another victory.

I'm not sure what I want to do with the old RV, but I will at least replace the fan controller, pressure wash it with the recently repaired pressure washer, and probably put a new exhaust on it.

The next big problem is a brush pile that is almost a story high. I got the Dodge Cummins pickup running last weekend and plan to bring my cousin's chipper over and spend a day turning the brush pile into a smaller pile. I will put new wood on my trailer bed in order to do this.

Twice this winter, the van needed a bit of freon. I got under it today, and it appears to be leaking at the seam where the two compressor halves come together. This means I will need to order another compressor.

My computers have been suffering without a real internet connection; it has been impractical to do many updates with metered cell phone hotspots. Now that I have Starlink, I have been going through and updating them. I have another laptop to back up and update tonight. I have another Qubes NUC that is not taking updates and will probably need to be backed up and re-installed; I think it's just too far out of date.

 Now onto the laptop...

  

Wednesday, May 04, 2022

RISC-V Ubuntu Virtual Machine

I'm pretty enthused about the open RISC-V architecture and have been meaning to get a RISC-V virtual machine up and going so I can learn the assembly language. Like many pieces of open source software, it didn't go that easily.

 

There are Debian, Fedora, and Ubuntu versions of Linux for RISC-V. I randomly chose Ubuntu.

 

I had trouble with U-Boot not being able to find the Device Tree Binary (the .dtb file), and found a post which indicated I should get a specific version of U-Boot and compile it.

 

From:

 

https://discourse.ubuntu.com/t/ubuntu-server-on-risc-v-documentation-needs-updating/23927/4

 

The user xypron supplied a script to build U-Boot:

 

#!/bin/sh
set -e

# Build the firmware only if it has not been built already.
if test ! -f opensbi/build/platform/generic/firmware/fw_payload.bin; then
    # Fetch and unpack the pre-installed Ubuntu server image for RISC-V.
    wget https://cdimage.ubuntu.com/releases/20.04/release/ubuntu-20.04.3-preinstalled-server-riscv64+unmatched.img.xz
    xz -dk ubuntu-20.04.3-preinstalled-server-riscv64+unmatched.img.xz

    # Cross-compile U-Boot (S-mode payload) at the v2021.10-rc3 tag.
    export CROSS_COMPILE=riscv64-linux-gnu-
    git clone https://source.denx.de/u-boot/u-boot.git
    cd u-boot/
    git reset --hard v2021.10-rc3
    make qemu-riscv64_smode_defconfig
    make -j$(nproc)
    cd ..

    # Build OpenSBI with U-Boot embedded as its payload.
    git clone https://github.com/riscv/opensbi.git
    cd opensbi/
    make PLATFORM=generic FW_PAYLOAD_PATH=../u-boot/u-boot.bin
    cd ..
fi

# Boot the Ubuntu image with the OpenSBI/U-Boot firmware.
# -gdb tcp::1234 also exposes a GDB server for debugging the guest.
qemu-system-riscv64 -machine virt -m 1G -nographic \
    -bios opensbi/build/platform/generic/firmware/fw_payload.bin \
    -smp cores=2 -gdb tcp::1234 \
    -device virtio-net-device,netdev=net0 \
    -netdev user,id=net0,tftp=tftp \
    -drive if=none,file=ubuntu-20.04.3-preinstalled-server-riscv64+unmatched.img,format=raw,id=mydisk \
    -device ich9-ahci,id=ahci -device ide-hd,drive=mydisk,bus=ahci.0 \
    -device virtio-rng-pci

 

I had already downloaded the ubuntu-20.04.4 image, so I removed the wget and changed the 20.04.3 references to 20.04.4.
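Something like the following sed one-liner would make that version substitution; the script filename here is just a placeholder for wherever you saved xypron's script:

sed -i 's/20\.04\.3/20.04.4/g' build-riscv-uboot.sh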

 

This script built a version of U-Boot and OpenSBI that allowed the Ubuntu RISC-V image to boot.
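As a side note, the script's "-gdb tcp::1234" flag means QEMU exposes a GDB stub, which should be handy for the assembly-language learning I mentioned above. A minimal sketch of attaching to it from the host, assuming gdb-multiarch (or another RISC-V-capable gdb) is installed:

# Attach to the running QEMU guest's GDB server on port 1234
gdb-multiarch \
  -ex 'set architecture riscv:rv64' \
  -ex 'target remote localhost:1234'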

 

Another piece that was helpful to me was resizing the disk image.

 

From the helpful Ubuntu Wiki:

 

https://wiki.ubuntu.com/RISC-V


Optionally, if you want a larger disk, you can expand the image (the filesystem will be automatically resized too).

qemu-img resize -f raw focal-preinstalled-server-riscv64.img +5G