post-deployment
ceph-health
Check the status of the Ceph cluster.
Uses ceph health to check whether the cluster is in a HEALTH_WARN state and prints a debug message.
- hosts: Controller
- groups: post-deployment
- metadata:
- parameters:
- fail_on_ceph_health_err: False
- osd_percentage_min: 0
- roles: ceph
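The underlying check is easy to reproduce outside of Ansible. A minimal Python sketch, assuming the ceph CLI is available on the node; the fail_on_err argument mirrors the fail_on_ceph_health_err parameter above:

    import subprocess

    def check_ceph_health(fail_on_err=False):
        # `ceph health` prints HEALTH_OK, HEALTH_WARN or HEALTH_ERR,
        # followed by a summary of any problems.
        status = subprocess.run(
            ["ceph", "health"], capture_output=True, text=True, check=True
        ).stdout.strip()
        if status.startswith("HEALTH_WARN"):
            print("DEBUG: ceph cluster is in HEALTH_WARN state: " + status)
        if fail_on_err and status.startswith("HEALTH_ERR"):
            raise RuntimeError("ceph cluster is unhealthy: " + status)
        return status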
View source code for the role.
containerized-undercloud-docker
Verify docker containers are up and ports are open.
Ensure relevant docker containers are up and running, with ports open to listen.
We iterate through a list of container names and ports provided in defaults, and ensure the system has those available.
- hosts: undercloud
- groups: post-deployment, pre-upgrade
- metadata:
- parameters:
- running_containers: ['glance_api', 'heat_api', 'heat_api_cfn', 'heat_api_cron', 'heat_engine', 'ironic_api', 'ironic_conductor', 'ironic_inspector', 'ironic_inspector_dnsmasq', 'ironic_neutron_agent', 'ironic_pxe_http', 'ironic_pxe_tftp', 'iscsid', 'keystone', 'keystone_cron', 'logrotate_crond', 'memcached', 'mistral_api', 'mistral_engine', 'mistral_event_engine', 'mistral_executor', 'mysql', 'neutron_api', 'neutron_dhcp', 'neutron_l3_agent', 'neutron_ovs_agent', 'nova_api', 'nova_api_cron', 'nova_compute', 'nova_conductor', 'nova_metadata', 'nova_placement', 'nova_scheduler', 'rabbitmq', 'swift_account_auditor', 'swift_account_reaper', 'swift_account_replicator', 'swift_account_server', 'swift_container_auditor', 'swift_container_replicator', 'swift_container_server', 'swift_container_updater', 'swift_object_auditor', 'swift_object_expirer', 'swift_object_replicator', 'swift_object_server', 'swift_object_updater', 'swift_proxy', 'swift_rsync', 'tripleo_ui', 'zaqar', 'zaqar_websocket']
- open_ports: [111, 873, 3000, 3306, 4369, 5000, 5050, 5672, 6000, 6001, 6002, 6379, 6385, 8000, 8004, 8080, 8088, 8774, 8775, 8778, 8787, 8888, 8989, 9000, 9292, 9696, 11211, 15672, 25672, 35357, 39422, {'search_regex': 'OpenSSH', 'port': 22}]
- roles: containerized-undercloud-docker
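A rough Python equivalent of both checks, assuming a local docker CLI and treating a successful TCP connect as "port open" (the real validation uses Ansible's wait_for, which also supports the search_regex entry shown above; the container and port values below are a small sample of the defaults):

    import socket
    import subprocess

    def running_containers():
        # One name per line for every currently running container.
        out = subprocess.run(
            ["docker", "ps", "--format", "{{.Names}}"],
            capture_output=True, text=True, check=True,
        ).stdout
        return set(out.split())

    def port_open(port, host="127.0.0.1", timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    missing = {"keystone", "mysql", "rabbitmq"} - running_containers()
    closed = [p for p in (3306, 5000, 8774) if not port_open(p)]
    print("missing containers:", missing, "closed ports:", closed)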
View source code for the role.
controller-token
Verify that the Keystone admin token is disabled.
This validation checks that the Keystone admin token is disabled on both the undercloud and the overcloud controllers after deployment.
- hosts: undercloud, Controller
- groups: post-deployment
- metadata:
- parameters:
- keystone_conf_file: /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf
- roles: controller-token
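The file check reduces to reading one option. A minimal sketch with Python's configparser, assuming the token counts as disabled when admin_token has no value in the [DEFAULT] section of the keystone_conf_file listed above:

    import configparser

    def admin_token_disabled(conf_file):
        # interpolation=None avoids errors on literal % signs in the file.
        parser = configparser.ConfigParser(interpolation=None)
        parser.read(conf_file)
        return not parser.get("DEFAULT", "admin_token", fallback=None)

    print(admin_token_disabled(
        "/var/lib/config-data/puppet-generated/keystone"
        "/etc/keystone/keystone.conf"))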
View source code for the role.
controller-ulimits
Check controller ulimits.
This will check the ulimits of each controller.
- hosts: Controller
- groups: post-deployment
- metadata:
- parameters:
- nofiles_min: 1024
- nproc_min: 2048
- roles: controller-ulimits
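On the node itself the two limits map onto RLIMIT_NOFILE and RLIMIT_NPROC. A sketch comparing the soft limits against the minimums above:

    import resource

    def check_ulimits(nofiles_min=1024, nproc_min=2048):
        nofile_soft, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
        nproc_soft, _ = resource.getrlimit(resource.RLIMIT_NPROC)
        problems = []
        if nofile_soft < nofiles_min:
            problems.append(f"nofile {nofile_soft} < {nofiles_min}")
        if nproc_soft < nproc_min:
            problems.append(f"nproc {nproc_soft} < {nproc_min}")
        return problems

    print(check_ulimits() or "ulimits OK")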
View source code for the role.
haproxy
HAProxy configuration.
Verify the HAProxy configuration has recommended values.
- hosts: Controller
- groups: post-deployment
- metadata:
- parameters:
- config_file: /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg
- defaults_timeout_queue: 2m
- defaults_timeout_server: 2m
- global_maxconn_min: 20480
- defaults_maxconn_min: 4096
- defaults_timeout_client: 2m
- defaults_timeout_check: 10s
- roles: haproxy
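A sketch of the kind of comparison the validation performs, assuming haproxy.cfg's usual layout of unindented section headers with indented option lines (timeouts are compared as literal strings here for brevity):

    def parse_haproxy(path):
        sections, current = {}, None
        with open(path) as cfg:
            for line in cfg:
                if not line.strip() or line.lstrip().startswith("#"):
                    continue
                if not line[0].isspace():          # new section header
                    current = line.split()[0]
                    sections.setdefault(current, {})
                elif current:
                    parts = line.split()
                    # "timeout queue 2m" -> key "timeout queue", value "2m"
                    sections[current][" ".join(parts[:-1])] = parts[-1]
        return sections

    cfg = parse_haproxy("/var/lib/config-data/puppet-generated"
                        "/haproxy/etc/haproxy/haproxy.cfg")
    assert int(cfg.get("global", {}).get("maxconn", 0)) >= 20480
    assert cfg.get("defaults", {}).get("timeout queue") == "2m"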
View source code for the role.
image-serve
Verify the image-serve service is working and answering requests.
Ensures the image-serve vhost is configured and httpd is running.
- hosts: undercloud
- groups: pre-upgrade, post-deployment, post-upgrade
- metadata:
- parameters:
- roles: image-serve
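A minimal liveness probe; the URL below is an assumption standing in for whatever address and port the undercloud's image-serve vhost actually answers on:

    import urllib.request

    def image_serve_answers(url, timeout=5):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status < 500
        except OSError:
            return False

    # Hypothetical endpoint; substitute the real vhost address and port.
    print(image_serve_answers("http://localhost:8787/"))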
View source code for the role.
neutron-sanity-check
Neutron Sanity Check.
Run neutron-sanity-check on the controller nodes to find potential issues with Neutron's configuration.
The tool expects all the configuration files that are passed to the Neutron services.
- hosts: Controller
- groups: post-deployment
- metadata:
- parameters:
- configs: ['/etc/neutron/neutron.conf', '/usr/share/neutron/neutron-dist.conf', '/etc/neutron/metadata_agent.ini', '/etc/neutron/dhcp_agent.ini', '/etc/neutron/fwaas_driver.ini', '/etc/neutron/l3_agent.ini', '/usr/share/neutron/neutron-lbaas-dist.conf', '/etc/neutron/lbaas_agent.ini']
- roles: neutron-sanity-check
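Conceptually the validation shells out to the neutron-sanity-check tool, passing every configuration file that exists on the node. A sketch over a subset of the list above:

    import os
    import subprocess

    configs = [
        "/etc/neutron/neutron.conf",
        "/usr/share/neutron/neutron-dist.conf",
        "/etc/neutron/metadata_agent.ini",
        "/etc/neutron/dhcp_agent.ini",
        "/etc/neutron/l3_agent.ini",
    ]

    cmd = ["neutron-sanity-check"]
    for conf in configs:
        if os.path.exists(conf):       # skip files absent on this node
            cmd += ["--config-file", conf]
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.returncode, result.stderr, sep="\n")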
View source code for the role.
no-op-firewall-nova-driver
Verify NoOpFirewallDriver is set in Nova.
When using Neutron, the firewall_driver option in Nova must be set to NoopFirewallDriver.
- hosts: nova_compute
- groups: post-deployment
- metadata:
- parameters:
- roles: no-op-firewall-nova-driver
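A sketch of the file check, accepting both the short and the fully qualified driver name; the nova.conf path used here is the usual containerized location and may differ per deployment:

    import configparser

    def noop_firewall_set(nova_conf):
        parser = configparser.ConfigParser(interpolation=None)
        parser.read(nova_conf)
        driver = parser.get("DEFAULT", "firewall_driver", fallback="")
        # Accept "NoopFirewallDriver" as well as
        # "nova.virt.firewall.NoopFirewallDriver".
        return driver.split(".")[-1] == "NoopFirewallDriver"

    print(noop_firewall_set(
        "/var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf"))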
View source code for the role.
nova-event-callback
Nova Event Callback Configuration Check.
- This validation verifies that the Nova Event Callback feature is configured; it is generally enabled by default. It checks the following files on the Overcloud Controller(s):
- /etc/nova/nova.conf:
[DEFAULT]/vif_plugging_is_fatal = True
[DEFAULT]/vif_plugging_timeout >= 300
- /etc/neutron/neutron.conf:
[nova]/auth_url = 'http://nova_admin_auth_ip:5000'
[nova]/tenant_name = 'service'
[DEFAULT]/notify_nova_on_port_data_changes = True
[DEFAULT]/notify_nova_on_port_status_changes = True
- hosts: Controller
- groups: post-deployment
- metadata:
- parameters:
- nova_config_file: /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf
- vif_plugging_fatal_check: vif_plugging_is_fatal
- neutron_config_file: /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf
- tenant_name_check: tenant_name
- vif_plugging_timeout_value_min: 300
- notify_nova_on_port_data_check: notify_nova_on_port_data_changes
- vif_plugging_timeout_check: vif_plugging_timeout
- notify_nova_on_port_status_check: notify_nova_on_port_status_changes
- roles: nova-event-callback
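All of the option checks above reduce to reading two INI files. A condensed sketch using the parameter values listed for this validation:

    import configparser

    def read_ini(path):
        # interpolation=None avoids errors on literal % signs.
        parser = configparser.ConfigParser(interpolation=None)
        parser.read(path)
        return parser

    nova = read_ini("/var/lib/config-data/puppet-generated"
                    "/nova/etc/nova/nova.conf")
    neutron = read_ini("/var/lib/config-data/puppet-generated"
                       "/neutron/etc/neutron/neutron.conf")

    assert nova.getboolean("DEFAULT", "vif_plugging_is_fatal", fallback=False)
    assert nova.getint("DEFAULT", "vif_plugging_timeout", fallback=0) >= 300
    assert neutron.get("nova", "tenant_name", fallback="") == "service"
    assert neutron.getboolean(
        "DEFAULT", "notify_nova_on_port_data_changes", fallback=False)
    assert neutron.getboolean(
        "DEFAULT", "notify_nova_on_port_status_changes", fallback=False)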
View source code for the role.
ntp
Verify all deployed nodes have their clocks synchronised.
Each overcloud node should have its clock synchronised.
The deployment should configure and run chronyd. This validation verifies that it is indeed running and connected to an NTP server on all nodes.
- hosts: overcloud
- groups: post-deployment
- metadata:
- parameters:
- roles: ntp
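The chrony side of the check can be reproduced with chronyc. A sketch that treats a normal leap status as "synchronised":

    import subprocess

    def clock_synchronised():
        # "chronyc tracking" reports "Leap status : Normal" once the
        # daemon is synchronised to a source, or "Not synchronised".
        out = subprocess.run(
            ["chronyc", "tracking"],
            capture_output=True, text=True, check=True,
        ).stdout
        return "Not synchronised" not in out

    print("clock OK" if clock_synchronised() else "clock not synchronised")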
View source code for the role.
openstack-endpoints
Check connectivity to various OpenStack services.
This validation gets the PublicVip address from the deployment and tries to access Horizon and get a Keystone token.
- hosts: undercloud
- groups: post-deployment, pre-upgrade, post-upgrade
- metadata:
- parameters:
- roles: openstack-endpoints
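The Keystone half of the check amounts to one authentication call. A sketch against the Keystone v3 password API with python-requests, where public_vip and the credentials are assumptions standing in for values read from the deployment:

    import requests

    def get_keystone_token(public_vip, username, password, project="admin"):
        body = {"auth": {
            "identity": {"methods": ["password"], "password": {"user": {
                "name": username,
                "domain": {"id": "default"},
                "password": password,
            }}},
            "scope": {"project": {"name": project,
                                  "domain": {"id": "default"}}},
        }}
        resp = requests.post(
            f"http://{public_vip}:5000/v3/auth/tokens", json=body, timeout=10)
        resp.raise_for_status()
        # Keystone returns the issued token in the X-Subject-Token header.
        return resp.headers["X-Subject-Token"]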
View source code for the role.
ovs-dpdk-pmd-cpus-check
Validates OVS DPDK PMD cores from all NUMA nodes.
OVS DPDK PMD CPUs must be provided from all NUMA nodes.
A failed status post-deployment indicates the PMD CPU list is not configured correctly.
- hosts: ComputeOvsDpdk
- groups: post-deployment
- metadata:
- parameters:
- roles: ovs-dpdk-pmd
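A sketch of the comparison, assuming the PMD cores are set through the pmd-cpu-mask hex bitmap in Open vSwitch and that the NUMA topology can be read from sysfs:

    import glob
    import subprocess

    def pmd_cpus():
        # pmd-cpu-mask is a hex bitmap of the cores running PMD threads.
        mask = subprocess.run(
            ["ovs-vsctl", "get", "Open_vSwitch", ".",
             "other_config:pmd-cpu-mask"],
            capture_output=True, text=True, check=True,
        ).stdout.strip().strip('"')
        bits = int(mask, 16)
        return {cpu for cpu in range(bits.bit_length()) if bits >> cpu & 1}

    def numa_cpus():
        nodes = {}
        for path in glob.glob("/sys/devices/system/node/node*/cpulist"):
            node, cpus = path.split("/")[-2], set()
            with open(path) as f:
                # cpulist looks like "0-17,36-53" or "5".
                for part in f.read().strip().split(","):
                    lo, _, hi = part.partition("-")
                    cpus.update(range(int(lo), int(hi or lo) + 1))
            nodes[node] = cpus
        return nodes

    pmds = pmd_cpus()
    uncovered = [n for n, cpus in numa_cpus().items() if not cpus & pmds]
    print("NUMA nodes without a PMD core:", uncovered or "none")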
View source code for the role.
pacemaker-status
Check the status of the pacemaker cluster.
This runs pcs status and checks for any failed actions.
A failed status post-deployment indicates something is not configured correctly. This should also be run before upgrade as the process will likely fail with a cluster that's not completely healthy.
- hosts: Controller
- groups: post-deployment
- metadata:
- parameters:
- roles: pacemaker-status
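A sketch of the check; different pcs releases label the section differently, so both markers are matched:

    import subprocess

    def pacemaker_has_failed_actions():
        out = subprocess.run(
            ["pcs", "status"], capture_output=True, text=True, check=True
        ).stdout
        # Older pcs prints "Failed Actions:", newer releases print
        # "Failed Resource Actions:" when a resource operation has failed.
        return any(marker in out for marker in
                   ("Failed Actions:", "Failed Resource Actions:"))

    print("failed actions found" if pacemaker_has_failed_actions() else "OK")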
View source code for the role.
rabbitmq-limits
RabbitMQ limits.
Make sure the RabbitMQ file descriptor limits are set to reasonable values.
- hosts: Controller
- groups: post-deployment
- metadata:
- parameters:
- roles: rabbitmq-limits
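A sketch of how the limit can be read from rabbitmqctl status; the output format varies across RabbitMQ releases, so two patterns are tried, and the 1024 threshold below is only illustrative:

    import re
    import subprocess

    def rabbitmq_fd_limit():
        out = subprocess.run(
            ["rabbitmqctl", "status"],
            capture_output=True, text=True, check=True,
        ).stdout
        # Older releases print Erlang terms such as
        # {file_descriptors,[{total_limit,65436},...]}; newer ones print
        # a human-readable "File Descriptors" section instead.
        match = (re.search(r"total_limit,(\d+)", out)
                 or re.search(r"file descriptors.*?(\d+)", out, re.IGNORECASE))
        return int(match.group(1)) if match else None

    limit = rabbitmq_fd_limit()
    print("OK" if limit and limit >= 1024 else f"fd limit too low: {limit}")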
View source code for the role.