Reference architecture

The reference architecture defines the minimum environment necessary to deploy OpenStack with Open Virtual Network (OVN) integration for the Networking service in production, with reasonable expectations of scale and performance. For evaluation purposes, you can deploy this environment using the Installation Guide or Vagrant. Any scaling or performance evaluation should use bare metal instead of virtual machines.

Layout

The reference architecture includes a minimum of four nodes.

The controller node contains the following components that provide enough functionality to launch basic instances:

  • One network interface for management
  • Identity service
  • Image service
  • Networking management with ML2 mechanism driver for OVN (control plane); see the configuration sketch after this list
  • Compute management (control plane)
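
The ML2 mechanism driver is enabled in the Networking service configuration on the controller node. The following is a minimal ml2_conf.ini sketch, not a complete configuration; the database node address 10.0.0.12 is an assumption for illustration, while 6641 and 6642 are the conventional OVN northbound and southbound ports:

[ml2]
mechanism_drivers = ovn
type_drivers = local,flat,vlan,geneve
tenant_network_types = geneve

[ovn]
ovn_nb_connection = tcp:10.0.0.12:6641
ovn_sb_connection = tcp:10.0.0.12:6642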

The database node contains the following components:

  • One network interface for management
  • OVN northbound service (ovn-northd)
  • Open vSwitch (OVS) database service (ovsdb-server) for the OVN northbound database (ovnnb.db)
  • Open vSwitch (OVS) database service (ovsdb-server) for the OVN southbound database (ovnsb.db)
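
For the controller and compute nodes to reach the northbound and southbound databases, each ovsdb-server instance must listen on the management network. A minimal sketch using the OVN set-connection commands; listening on all addresses (0.0.0.0) is an assumption for illustration, and production deployments typically restrict the listen address or use SSL:

$ ovn-nbctl set-connection ptcp:6641:0.0.0.0
$ ovn-sbctl set-connection ptcp:6642:0.0.0.0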

Note

For functional evaluation only, you can combine the controller and database nodes.

The two compute nodes contain the following components:

  • Three network interfaces for management, overlay networks, and provider networks
  • Compute management (hypervisor)
  • Hypervisor (KVM)
  • OVN controller service (ovn-controller); see the configuration sketch after this list
  • OVS data plane service (ovs-vswitchd)
  • OVS database service (ovsdb-server) with OVS local configuration (conf.db) database
  • Networking DHCP agent
  • Networking metadata agent
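
The OVN controller service learns how to reach the southbound database and how to build overlay tunnels from keys in the local OVS database. A minimal sketch of this configuration; the addresses are assumptions for illustration (10.0.0.12 for the database node, 10.0.0.32 for this node's overlay network interface):

$ ovs-vsctl set open . \
    external-ids:ovn-remote=tcp:10.0.0.12:6642 \
    external-ids:ovn-encap-type=geneve \
    external-ids:ovn-encap-ip=10.0.0.32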

Note

By default, deploying DHCP and metadata agents on two compute nodes provides basic redundancy for these services. For larger environments, consider deploying the agents on a fraction of the compute nodes to minimize control plane traffic.

Hardware layout

Service layout

Networking service with OVN integration

The reference architecture deploys the Networking service with OVN integration as follows:

Architecture for Networking service with OVN integration

Each compute node contains the following network components:

Compute node network components

Note

The Networking service creates a unique network namespace for each virtual network that enables the DHCP service.
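
You can observe these namespaces on the nodes running the DHCP agent; the qdhcp- prefix comes from the Networking service, and NETWORK_UUID stands in for an actual network UUID:

$ ip netns list
qdhcp-NETWORK_UUID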

Accessing OVN database content

OVN stores configuration data in a collection of OVS database tables. The following commands show the contents of the most common database tables in the northbound and southbound databases. The example database output in this section uses these commands with various output filters.

$ ovn-nbctl list Logical_Switch
$ ovn-nbctl list Logical_Switch_Port
$ ovn-nbctl list ACL
$ ovn-nbctl list Logical_Router
$ ovn-nbctl list Logical_Router_Port

$ ovn-sbctl list Chassis
$ ovn-sbctl list Encap
$ ovn-sbctl list Logical_Flow
$ ovn-sbctl list Multicast_Group
$ ovn-sbctl list Datapath_Binding
$ ovn-sbctl list Port_Binding
$ ovn-sbctl list MAC_Binding
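
For example, you can filter either database with the generic find command or restrict the output to specific columns; the type value shown here is illustrative:

$ ovn-nbctl find Logical_Switch_Port type=router
$ ovn-sbctl --bare --columns=hostname list Chassis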

Note

By default, you must run these commands from the node containing the OVN databases.
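
Alternatively, if the databases listen on the management network, you can point the tools at them from another node; 10.0.0.12 is an assumed database node address:

$ ovn-nbctl --db=tcp:10.0.0.12:6641 list Logical_Switch
$ ovn-sbctl --db=tcp:10.0.0.12:6642 list Chassis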

Adding a compute node

When you add a compute node to the environment, the OVN controller service on it connects to the OVN southbound database and registers the node as a chassis. The Chassis table then contains a record for the node similar to the following:

_uuid               : 9be8639d-1d0b-4e3d-9070-03a655073871
encaps              : [2fcefdf4-a5e7-43ed-b7b2-62039cc7e32e]
external_ids        : {ovn-bridge-mappings=""}
hostname            : "compute1"
name                : "410ee302-850b-4277-8610-fa675d620cb7"
vtep_logical_switches: []

The encaps field value refers to the Encap record that stores tunnel endpoint information for the compute node:

_uuid               : 2fcefdf4-a5e7-43ed-b7b2-62039cc7e32e
ip                  : "10.0.0.32"
options             : {}
type                : geneve

Routers

Note

Currently, OVN lacks support for routing between self-service (private) and provider networks. However, it supports routing between self-service networks.
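
For example, routing between two self-service networks only requires attaching both subnets to the same router; the names here are hypothetical:

$ openstack router create router1
$ openstack router add subnet router1 selfservice-subnet1
$ openstack router add subnet router1 selfservice-subnet2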

Instances

Launching an instance causes the same series of operations regardless of the network. The following example uses a provider network named provider, the cirros image, the m1.tiny flavor, the default security group, and the mykey key.
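
A sketch of such a launch; the instance name provider-instance is illustrative:

$ openstack server create --flavor m1.tiny --image cirros \
    --network provider --security-group default \
    --key-name mykey provider-instance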
