(D241106) OpenWrt virtual machine on FreeBSD bhyve: if_bridge + tap (vmnet), netgraph, VALE and vether
We're building an all-in-one server for our business.
The networking gets complicated when we want to run several virtual machines and still reach the outside world. The router will be virtualized; it runs OpenWrt or OPNsense, as we prefer.
We have several choices:
1. Traditional way: if_bridge and tap
It is not very efficient. We tested on an N4100 CPU:
| Client | Server | Speed |
| --- | --- | --- |
| vm2 (Debian) | vm0 (ImmortalWrt) | 2.11 Gbits/sec |
| vm2 (Debian) | vm1 (Debian) | 2.17 Gbits/sec |
| vm2 (Debian) | host (FreeBSD) | 1.35 Gbits/sec |
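All throughput numbers here and below come from an iperf3-style run between the machines; a minimal sketch (the address is an assumption, not our exact invocation):
$ iperf3 -s                  # on the server, e.g. vm0
$ iperf3 -c 192.168.1.10     # on the client, e.g. vm2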
Our config:
$ cat /etc/rc.conf
# create the bridge and the VM's tap device at boot
cloned_interfaces="bridge0 tap0"
# bridge the physical NIC with tap0; the host takes its address on the bridge via DHCP
ifconfig_bridge0="dhcp addm em0 addm tap0"
ifconfig_em0="up"
$ cat /boot/loader.conf
if_bridge_load="YES"
if_tap_load="YES"
$ cat /pool/vm/openwrt/openwrt.conf
loader="uefi"
cpu=2
memory=512M
network0_type="e1000"
network0_switch="public"
# attach the guest NIC to the pre-created tap0 instead of a dynamically created tap
network0_device="tap0"
disk0_type="ahci-hd"
disk0_name="immortalwrt-23.05.4-x86-64-generic-ext4-combined-efi.img"
2. A more modern way: VALE and vether
This time we use a VALE switch for the VMs and a vether(4) interface to connect the host to it (epair(4) could also connect host and VM). We tested on an N4100 CPU:
| Client | Server | Speed |
| --- | --- | --- |
| vm2 (Debian) | vm0 (ImmortalWrt) | 3.50 Gbits/sec |
| vm2 (Debian) | vm1 (Debian) | 5.30 Gbits/sec |
| vm2 (Debian) | host (FreeBSD) | 1.48 Gbits/sec |
Our config:
$ cat /etc/rc.conf
defaultrouter="192.168.1.1"
# vether0 is the host-side interface that we attach to the VALE switch below
cloned_interfaces="vether0"
ifconfig_vether0="192.168.1.2/24 up"
$ cat /boot/loader.conf
if_vether_load="YES"
$ cat /pool/vm/openwrt/openwrt.conf
loader="uefi"
cpu=2
memory=512M
network0_type="e1000"
network0_switch="public"
disk0_type="ahci-hd"
disk0_name="immortalwrt-23.05.4-x86-64-generic-ext4-combined-efi.img"
You need to run the following command every time after the OpenWrt VM starts, to connect the host to the VM. If you have vm-bhyve you can put it in a prestart script (remember to `chmod +x` it; a sketch follows):
`valectl -h vale-name:vether-name`, for example `valectl -h vale0:vether0`.
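A minimal sketch of such a prestart script; the `prestart` config key and the script path are assumptions, so check your vm-bhyve version's documentation:
$ cat /pool/vm/openwrt/prestart.sh
#!/bin/sh
# attach the host stack of vether0 to the VALE switch the VMs use
valectl -h vale0:vether0
Then reference it from the guest config with something like prestart="prestart.sh".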
3. The most complex way: netgraph
We tested on an N4100 CPU:
| Client | Server | Speed |
| --- | --- | --- |
| vm2 (Debian) | vm0 (ImmortalWrt) | 2.41 Gbits/sec |
| vm2 (Debian) | vm1 (Debian) | 2.47 Gbits/sec |
| vm2 (Debian) | host (FreeBSD) | 2.27 Gbits/sec |
`ng_bridge` is much faster than `vether` in this case (VM to host).
We thought we could keep VALE as our VM switch and replace vether with netgraph or epair. With netgraph as the host virtual interface and VALE as the bridge, we got "bad pkt" errors; something was not right. We will look into it further and set it up manually (a sketch of the attempted combination follows the table):
| Client | Server | Speed |
| --- | --- | --- |
| vm2 (Debian) | vm0 (ImmortalWrt) | 3.49 Gbits/sec |
| vm2 (Debian) | vm1 (Debian) | 5.25 Gbits/sec |
| vm2 (Debian) | host (FreeBSD) | 1.46 Gbits/sec |
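For reference, a sketch of the combination that produced the bad pkt errors; the interface and switch names are assumptions:
$ ngctl mkpeer . eiface hook ether        # create an ng_eiface (appears as ngeth0)
$ ifconfig ngeth0 inet 192.168.1.2/24 up
$ valectl -h vale0:ngeth0                 # attach its host stack to the VALE switch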
Our config (the `ng_bridge` setup):
$ cat /boot/loader.conf
ng_eiface_load="YES"
ng_bridge_load="YES"
ng_ether_load="YES"
$ cat /etc/rc.conf
ngbridge_enable="YES"
# one netgraph bridge named "lan", with a single ng_eiface (nge_1u) for the host
ngbridge_names="lan"
ngbridge_lan_eifaces="nge_1u"
ngbridge_nge_1u_mac="00:37:92:01:02:02"
ngbridge_nge_1u_addr_num="1"
ngbridge_nge_1u_addr_1="inet 192.168.1.2/24"
ngbridge_lan_eifaces_keep="nge_1u"
ngbridge_lan_route_num=1
ngbridge_lan_route_1="-net default 192.168.1.1"
ngbridge_lan_vlans="NO"
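For reference, roughly what the ngbridge script sets up, done by hand with ngctl; a sketch with names matching the rc.conf above:
$ ngctl mkpeer . eiface hook ether         # create an ng_eiface node (interface ngeth0)
$ ngctl mkpeer ngeth0: bridge ether link0  # attach a new ng_bridge to its ether hook
$ ngctl name ngeth0:ether lan              # name the bridge node "lan"
$ ifconfig ngeth0 name nge_1u              # rename the interface to match the config
$ ifconfig nge_1u ether 00:37:92:01:02:02
$ ifconfig nge_1u inet 192.168.1.2/24 up
$ route add default 192.168.1.1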
$ cat /pool/vm/openwrt/openwrt.conf
loader="uefi"
cpu=2
memory=512M
network0_type="e1000"
network0_switch="lanbridge"
disk0_type="ahci-hd"
disk0_name="immortalwrt-23.05.4-x86-64-generic-ext4-combined-efi.img"