User VM Networking

One of the major ways for the User VM to communicate with the outside world is through a number of networking interfaces exposed by the hypervisor.

As shown on the System Overview page, the User VM does not have any direct connection to the hardware or the outside world; all communication is passed in one way or another through the ACU6 Base System, which has some effect on the behaviour of those interfaces.

Interfaces

The hypervisor exposes a number of virtual Ethernet interfaces to the User VM, some of them representing physical network connections on the ACU6-Pro, others representing virtual tunnels between the User VM and the ACU6 Base System. In addition there are two SocketCAN interfaces mapping to the two physical CAN interfaces present on the ACU6-Pro.

Idx | Interface | Name   | Type            | MAC Address       | IP Address
--- | --------- | ------ | --------------- | ----------------- | ----------------
0   | eth0      | ipcbr  | Virtual bridge  | 00:16:3E:0E:03:01 | 198.18.2.2
1   | eth1      | inetbr | Virtual bridge  | 00:16:3E:0E:03:02 | 198.18.3.2
2   | eth2      | T1     | Physical bridge | <device specific> | 198.18.6.2 [1]
3   | eth3      | TX     | Physical bridge | <device specific> | 198.18.4.2 [1]
4   | eth4      | diagbr | Virtual bridge  | 00:16:3E:0E:03:03 | 198.18.1.3 [1]
5   | eth5      | wifibr | Virtual bridge  | 00:16:3E:0E:03:04 | 198.18.123.2
0   | can0      | can0   | PV-CAN          | <not applicable>  | <not applicable>
1   | can1      | can1   | PV-CAN          | <not applicable>  | <not applicable>

Physical Networks

The User VM interfaces connected to the physical interfaces, that is eth2 and eth3 connected to the T1 and TX (Ethernet) interfaces respectively, are the most straightforward and easiest to understand.

Each of these User VM interfaces is directly bridged by the ACU6 Base System to the corresponding external hardware interface. This means that any and all traffic on the physical interface is mirrored on the bridge, and everything sent to the bridge is sent out on the physical interface without any processing done by the ACU6 Base System.

Warning

The direct bridging means the User VM is potentially exposed to unwanted or unexpected connections from the outside. It is up to the developer building a User VM to properly set up and configure firewalls, filtering rules and similar measures to ensure the security of the system.
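
As a starting point, unsolicited inbound traffic on the bridged physical interfaces can be dropped at boot. The sketch below drives nftables from Python; the table and chain names are arbitrary, and it assumes the nft binary is present in the User VM image, which depends on how the image is built.

    import subprocess

    # Interfaces directly bridged to the physical T1 and TX ports
    # (eth2 and eth3 in the interface table above).
    BRIDGED_IFACES = "{ eth2, eth3 }"

    # A dedicated table avoids touching any ruleset the image already contains.
    NFT_COMMANDS = [
        "add table inet uservm",
        "add chain inet uservm input "
        "{ type filter hook input priority 0 ; policy accept ; }",
        # Allow replies to connections the User VM initiated itself.
        f"add rule inet uservm input iifname {BRIDGED_IFACES} "
        "ct state established,related accept",
        # Drop everything else arriving unsolicited on the physical bridges.
        f"add rule inet uservm input iifname {BRIDGED_IFACES} drop",
    ]

    def apply_rules() -> None:
        for command in NFT_COMMANDS:
            # nft joins its argv back into one rule string, so a plain split suffices.
            subprocess.run(["nft", *command.split()], check=True)

    if __name__ == "__main__":
        apply_rules()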

The physical interfaces must be enabled by requesting the appropriate resource from the Ethernet service.

Virtual Tunnels

Virtual tunnels provide a software-only interface between the User VM and the ACU6 Base System that is not directly connected to any physical network interface. The four tunnels that exist in the system have somewhat different usage and characteristics.

ipcbr

is used for direct communication between the ACU6 Base System and user applications running in the User VM. As the name suggests the main use is for AIPC, but a few other services such as usbip and BTstack also use the ipcbr virtual tunnel.

diagbr

is bridged to the USB Ethernet interface used for development and debugging; for example, it can be used for direct SSH access to the User VM during development. The direct bridging in some ways makes it similar to the physical interfaces in terms of security, and it is recommended that a final production build of the User VM disables this interface.

inetbr

provides internet access to the User VM. Traffic is dynamically routed via either the NAD or the WiFi Station depending on which is connected; if both are connected, WiFi takes precedence.

The connection between inetbr and the internet is done using NAT, which means that packets from the User VM will have their IP addresses rewritten. This generally does not cause any issues when implementing networking clients on the User VM; however, care must be taken when trying to implement server services or any type of peer-to-peer communication.
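
The difference matters mostly on the server side, as the sketch below illustrates: an ordinary outbound client works unchanged, while a listening socket is only reachable from the NAT side. The host name and port numbers are placeholders.

    import socket

    def outbound_client(host: str = "example.com", port: int = 443) -> None:
        # Outbound connections behave as on any Linux host; the Base System
        # rewrites (NATs) the User VM's source address on the way out and back.
        with socket.create_connection((host, port), timeout=10) as sock:
            print("connected:", sock.getsockname(), "->", sock.getpeername())

    def naive_server(port: int = 8080) -> None:
        # This listens fine inside the User VM, but peers on the internet cannot
        # reach it: the NAT on the inetbr path has no inbound mapping for it, so
        # only hosts on the NAT side can connect.
        with socket.create_server(("0.0.0.0", port)) as server:
            conn, peer = server.accept()
            print("accepted connection from", peer)
            conn.close()

    if __name__ == "__main__":
        outbound_client()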

SocketCAN

SocketCAN provides a network-like interface to CAN buses. The ACU6 Base System exposes two interfaces, called PV-CAN, to the User VM; they implement the SocketCAN interface and correspond to the two physical CAN interfaces on the ACU6-Pro.

Just as with the virtual Ethernet interfaces described above, the SocketCAN interfaces within the User VM are not directly connected to the underlying physical CAN interfaces on the system. Instead, a gateway component within the ACU6 Base System takes all CAN frames sent to the virtual interface by the User VM and replicates them to the physical interface, and vice versa: CAN frames received on the physical interface are duplicated on the virtual interface to the User VM.
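
From inside the User VM the PV-CAN interfaces are used like any other SocketCAN device. The sketch below sends and receives classic CAN frames on can0 using Python's raw SocketCAN support; the CAN ID and payload are arbitrary example values.

    import socket
    import struct

    # Layout of struct can_frame: 32-bit CAN ID, 8-bit length, 3 padding bytes,
    # then 8 data bytes.
    CAN_FRAME_FORMAT = "=IB3x8s"
    CAN_FRAME_SIZE = struct.calcsize(CAN_FRAME_FORMAT)

    def open_can(interface: str = "can0") -> socket.socket:
        sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
        sock.bind((interface,))
        return sock

    def send_frame(sock: socket.socket, can_id: int, data: bytes) -> None:
        # The gateway in the ACU6 Base System copies this frame from the virtual
        # PV-CAN interface onto the physical CAN bus.
        sock.send(struct.pack(CAN_FRAME_FORMAT, can_id, len(data), data.ljust(8, b"\x00")))

    def recv_frame(sock: socket.socket) -> tuple:
        can_id, length, payload = struct.unpack(CAN_FRAME_FORMAT, sock.recv(CAN_FRAME_SIZE))
        return can_id, payload[:length]

    if __name__ == "__main__":
        can_sock = open_can("can0")
        send_frame(can_sock, 0x123, b"\x01\x02")   # example ID and payload
        print(recv_frame(can_sock))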

The fact that there are two different CAN interfaces involved for each physical channel has a few repercussions for software running in the User VM.

The first is related to configuration of bitrates, FD mode and so on. The User VM code first needs to configure the physical interface using the CAN service, and then also configure the virtual interface inside the User VM using normal configuration tools such as the ip tool.
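
For the in-VM half of that configuration, something along the lines of the sketch below can be used. It only brings the virtual interface up; the bitrate and FD mode of the physical controller are configured through the ACU6 Base System and are not touched here. The MTU step is an assumption about how the PV-CAN interface handles FD-sized frames and may not be required.

    import subprocess

    def configure_pv_can(interface: str = "can0", fd: bool = False) -> None:
        """Bring up the virtual PV-CAN interface inside the User VM."""
        if fd:
            # Assumption: like other virtual CAN interfaces, the PV-CAN interface
            # needs its MTU raised from 16 (classic CAN) to 72 to carry FD frames.
            subprocess.run(["ip", "link", "set", "dev", interface, "mtu", "72"],
                           check=True)
        subprocess.run(["ip", "link", "set", "dev", interface, "up"], check=True)

    if __name__ == "__main__":
        configure_pv_can("can0")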

The second issue is that certain options sometimes used together with SocketCAN sockets work in unexpected ways. RAW SocketCAN options, including CAN_RAW_FILTER, CAN_RAW_LOOPBACK and CAN_RAW_RECV_OWN_MSGS, when set inside the User VM only apply to the virtual interface between the User VM and the ACU6 Base System, and thus cannot be used to set up hardware filtering or to reliably ensure frames are delivered to the physical CAN bus. Similarly, CAN frame timestamps are set when the frame is received on the virtual interface, not when it is received on the physical interface. In practice the two will be very close; however, under high load buffering can introduce small shifts.
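
To make the first point concrete, the sketch below installs a receive filter on a raw CAN socket. The filter is evaluated by the kernel inside the User VM on the virtual PV-CAN interface, so it reduces what the application sees but not what crosses the gateway or the physical bus; the CAN ID and mask are example values.

    import socket
    import struct

    def open_filtered_can(interface: str = "can0",
                          can_id: int = 0x123, mask: int = 0x7FF) -> socket.socket:
        sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
        # struct can_filter: 32-bit can_id followed by 32-bit can_mask.
        can_filter = struct.pack("=II", can_id, mask)
        # Applied by the kernel inside the User VM, i.e. on the virtual PV-CAN
        # interface only: frames still cross the gateway from the physical bus
        # before being discarded here. This is not hardware filtering.
        sock.setsockopt(socket.SOL_CAN_RAW, socket.CAN_RAW_FILTER, can_filter)
        sock.bind((interface,))
        return sock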

Warning

Currently the first CAN bus must be requested before the second CAN bus, otherwise unexpected behavior will result. For any User VM that only needs access to the second physical CAN bus, a “dummy” request and configuration of the first bus must be performed.

Network routing

Starting with base system version 10.7 / SDK version 10.3 a new service for connectivity monitoring and configuration was introduced. In addition to reporting the status of different interfaces, it also allows configuration of routing for different origins of internet traffic.

For details see the Connectivity Management documentation.