2026-02-15 02:27:06.683795 | Job console starting
2026-02-15 02:27:06.701095 | Updating git repos
2026-02-15 02:27:06.795353 | Cloning repos into workspace
2026-02-15 02:27:06.979834 | Restoring repo states
2026-02-15 02:27:07.000748 | Merging changes
2026-02-15 02:27:07.000770 | Checking out repos
2026-02-15 02:27:07.271487 | Preparing playbooks
2026-02-15 02:27:07.957562 | Running Ansible setup
2026-02-15 02:27:13.369884 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-02-15 02:27:14.179761 |
2026-02-15 02:27:14.179940 | PLAY [Base pre]
2026-02-15 02:27:14.197050 |
2026-02-15 02:27:14.197184 | TASK [Setup log path fact]
2026-02-15 02:27:14.227984 | orchestrator | ok
2026-02-15 02:27:14.245672 |
2026-02-15 02:27:14.245809 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-02-15 02:27:14.286595 | orchestrator | ok
2026-02-15 02:27:14.298536 |
2026-02-15 02:27:14.298644 | TASK [emit-job-header : Print job information]
2026-02-15 02:27:14.341378 | # Job Information
2026-02-15 02:27:14.341608 | Ansible Version: 2.16.14
2026-02-15 02:27:14.341655 | Job: testbed-upgrade-stable-rc-ubuntu-24.04
2026-02-15 02:27:14.341702 | Pipeline: periodic-midnight
2026-02-15 02:27:14.341735 | Executor: 521e9411259a
2026-02-15 02:27:14.341764 | Triggered by: https://github.com/osism/testbed
2026-02-15 02:27:14.341794 | Event ID: 82b642e1f4d240ceac36cce41fbbc635
2026-02-15 02:27:14.350132 |
2026-02-15 02:27:14.350251 | LOOP [emit-job-header : Print node information]
2026-02-15 02:27:14.478745 | orchestrator | ok:
2026-02-15 02:27:14.479214 | orchestrator | # Node Information
2026-02-15 02:27:14.479306 | orchestrator | Inventory Hostname: orchestrator
2026-02-15 02:27:14.479372 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-02-15 02:27:14.479431 | orchestrator | Username: zuul-testbed03
2026-02-15 02:27:14.479485 | orchestrator | Distro: Debian 12.13
2026-02-15 02:27:14.479545 | orchestrator | Provider: static-testbed
2026-02-15 02:27:14.479600 | orchestrator | Region:
2026-02-15 02:27:14.479654 | orchestrator | Label: testbed-orchestrator
2026-02-15 02:27:14.479704 | orchestrator | Product Name: OpenStack Nova
2026-02-15 02:27:14.479754 | orchestrator | Interface IP: 81.163.193.140
2026-02-15 02:27:14.504416 |
2026-02-15 02:27:14.504600 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-02-15 02:27:15.046623 | orchestrator -> localhost | changed
2026-02-15 02:27:15.062267 |
2026-02-15 02:27:15.062447 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-02-15 02:27:16.167338 | orchestrator -> localhost | changed
2026-02-15 02:27:16.190428 |
2026-02-15 02:27:16.190568 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-02-15 02:27:16.510057 | orchestrator -> localhost | ok
2026-02-15 02:27:16.525752 |
2026-02-15 02:27:16.525937 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-02-15 02:27:16.564751 | orchestrator | ok
2026-02-15 02:27:16.585948 | orchestrator | included: /var/lib/zuul/builds/2104699b13df41e3896cc6406b0490a9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-02-15 02:27:16.594395 |
2026-02-15 02:27:16.594498 | TASK [add-build-sshkey : Create Temp SSH key]
2026-02-15 02:27:18.363067 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-02-15 02:27:18.363324 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/2104699b13df41e3896cc6406b0490a9/work/2104699b13df41e3896cc6406b0490a9_id_rsa
2026-02-15 02:27:18.363368 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/2104699b13df41e3896cc6406b0490a9/work/2104699b13df41e3896cc6406b0490a9_id_rsa.pub
2026-02-15 02:27:18.363396 | orchestrator -> localhost | The key fingerprint is:
2026-02-15 02:27:18.363421 | orchestrator -> localhost | SHA256:6XM3k8W/TRRMGcLOvVb2eGYSDUjlI1qyCfoRo8vPnXQ zuul-build-sshkey
2026-02-15 02:27:18.363445 | orchestrator -> localhost | The key's randomart image is:
2026-02-15 02:27:18.363482 | orchestrator -> localhost | +---[RSA 3072]----+
2026-02-15 02:27:18.363505 | orchestrator -> localhost | | .o+.oo|
2026-02-15 02:27:18.363528 | orchestrator -> localhost | | .o=. |
2026-02-15 02:27:18.363550 | orchestrator -> localhost | | + . = += |
2026-02-15 02:27:18.363571 | orchestrator -> localhost | | o = * =.o=|
2026-02-15 02:27:18.363591 | orchestrator -> localhost | | o S + o+=|
2026-02-15 02:27:18.363617 | orchestrator -> localhost | | . + . oo==|
2026-02-15 02:27:18.363637 | orchestrator -> localhost | | o + o E .=o|
2026-02-15 02:27:18.363657 | orchestrator -> localhost | | o = + o .o|
2026-02-15 02:27:18.363678 | orchestrator -> localhost | | o o ..|
2026-02-15 02:27:18.363699 | orchestrator -> localhost | +----[SHA256]-----+
2026-02-15 02:27:18.363749 | orchestrator -> localhost | ok: Runtime: 0:00:01.251355
2026-02-15 02:27:18.371549 |
2026-02-15 02:27:18.371672 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-02-15 02:27:18.402287 | orchestrator | ok
2026-02-15 02:27:18.413130 | orchestrator | included: /var/lib/zuul/builds/2104699b13df41e3896cc6406b0490a9/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-02-15 02:27:18.422640 |
2026-02-15 02:27:18.422743 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-02-15 02:27:18.447076 | orchestrator | skipping: Conditional result was False
2026-02-15 02:27:18.456975 |
2026-02-15 02:27:18.457102 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-02-15 02:27:19.155611 | orchestrator | changed
2026-02-15 02:27:19.165669 |
2026-02-15 02:27:19.165808 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-02-15 02:27:19.482603 | orchestrator | ok
2026-02-15 02:27:19.495749 |
2026-02-15 02:27:19.495902 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-02-15 02:27:19.913820 | orchestrator | ok
2026-02-15 02:27:19.920340 |
2026-02-15 02:27:19.920456 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-02-15 02:27:20.366121 | orchestrator | ok
2026-02-15 02:27:20.375579 |
2026-02-15 02:27:20.375732 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-02-15 02:27:20.400677 | orchestrator | skipping: Conditional result was False
2026-02-15 02:27:20.410471 |
2026-02-15 02:27:20.410605 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-02-15 02:27:20.881224 | orchestrator -> localhost | changed
2026-02-15 02:27:20.895874 |
2026-02-15 02:27:20.895992 | TASK [add-build-sshkey : Add back temp key]
2026-02-15 02:27:21.265474 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/2104699b13df41e3896cc6406b0490a9/work/2104699b13df41e3896cc6406b0490a9_id_rsa (zuul-build-sshkey)
2026-02-15 02:27:21.265853 | orchestrator -> localhost | ok: Runtime: 0:00:00.020683
2026-02-15 02:27:21.277422 |
2026-02-15 02:27:21.277576 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-02-15 02:27:21.733053 | orchestrator | ok
2026-02-15 02:27:21.745234 |
2026-02-15 02:27:21.745381 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-02-15 02:27:21.781362 | orchestrator | skipping: Conditional result was False
2026-02-15 02:27:21.848932 |
2026-02-15 02:27:21.849135 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-02-15 02:27:22.342783 | orchestrator | ok
2026-02-15 02:27:22.358204 |
2026-02-15 02:27:22.358341 | TASK [validate-host : Define zuul_info_dir fact]
2026-02-15 02:27:22.401149 | orchestrator | ok
2026-02-15 02:27:22.408636 |
2026-02-15 02:27:22.408737 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-02-15 02:27:22.746909 | orchestrator -> localhost | ok
2026-02-15 02:27:22.755377 |
2026-02-15 02:27:22.755496 | TASK [validate-host : Collect information about the host]
2026-02-15 02:27:24.063248 | orchestrator | ok
2026-02-15 02:27:24.077667 |
2026-02-15 02:27:24.077786 | TASK [validate-host : Sanitize hostname]
2026-02-15 02:27:24.155569 | orchestrator | ok
2026-02-15 02:27:24.164887 |
2026-02-15 02:27:24.165054 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-02-15 02:27:24.747997 | orchestrator -> localhost | changed
2026-02-15 02:27:24.762812 |
2026-02-15 02:27:24.763062 | TASK [validate-host : Collect information about zuul worker]
2026-02-15 02:27:25.257552 | orchestrator | ok
2026-02-15 02:27:25.265204 |
2026-02-15 02:27:25.265341 | TASK [validate-host : Write out all zuul information for each host]
2026-02-15 02:27:25.831609 | orchestrator -> localhost | changed
2026-02-15 02:27:25.854141 |
2026-02-15 02:27:25.854279 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-02-15 02:27:26.185492 | orchestrator | ok
2026-02-15 02:27:26.199222 |
2026-02-15 02:27:26.199385 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-02-15 02:27:50.367265 | orchestrator | changed:
2026-02-15 02:27:50.367583 | orchestrator | .d..t...... src/
2026-02-15 02:27:50.367634 | orchestrator | .d..t...... src/github.com/
2026-02-15 02:27:50.367674 | orchestrator | .d..t...... src/github.com/osism/
2026-02-15 02:27:50.367710 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-02-15 02:27:50.367745 | orchestrator | RedHat.yml
2026-02-15 02:27:50.389712 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-02-15 02:27:50.389730 | orchestrator | RedHat.yml
2026-02-15 02:27:50.389783 | orchestrator | = 2.2.0"...
2026-02-15 02:28:01.422187 | orchestrator | - Finding latest version of hashicorp/null...
2026-02-15 02:28:01.440633 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-02-15 02:28:01.877649 | orchestrator | - Installing hashicorp/local v2.6.2...
2026-02-15 02:28:02.897432 | orchestrator | - Installed hashicorp/local v2.6.2 (signed, key ID 0C0AF313E5FD9F80)
2026-02-15 02:28:02.961722 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-02-15 02:28:03.705944 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-02-15 02:28:04.055995 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-02-15 02:28:04.908594 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-02-15 02:28:04.908774 | orchestrator |
2026-02-15 02:28:04.908802 | orchestrator | Providers are signed by their developers.
2026-02-15 02:28:04.908815 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-02-15 02:28:04.908828 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-02-15 02:28:04.908845 | orchestrator |
2026-02-15 02:28:04.908858 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-02-15 02:28:04.908895 | orchestrator | selections it made above. Include this file in your version control repository
2026-02-15 02:28:04.908910 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-02-15 02:28:04.908923 | orchestrator | you run "tofu init" in the future.
2026-02-15 02:28:04.909192 | orchestrator |
2026-02-15 02:28:04.909222 | orchestrator | OpenTofu has been successfully initialized!
2026-02-15 02:28:04.909235 | orchestrator |
2026-02-15 02:28:04.909247 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-02-15 02:28:04.909260 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-02-15 02:28:04.909274 | orchestrator | should now work.
2026-02-15 02:28:04.909286 | orchestrator |
2026-02-15 02:28:04.909299 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-02-15 02:28:04.909311 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-02-15 02:28:04.909325 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-02-15 02:28:05.117039 | orchestrator | Created and switched to workspace "ci"!
2026-02-15 02:28:05.117114 | orchestrator |
2026-02-15 02:28:05.117123 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-02-15 02:28:05.117132 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-02-15 02:28:05.117140 | orchestrator | for this configuration.
2026-02-15 02:28:05.257223 | orchestrator | ci.auto.tfvars
2026-02-15 02:28:05.270699 | orchestrator | default_custom.tf
2026-02-15 02:28:06.403408 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-02-15 02:28:06.934174 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-02-15 02:28:07.207479 | orchestrator |
2026-02-15 02:28:07.207570 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-02-15 02:28:07.207580 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-02-15 02:28:07.207585 | orchestrator | + create
2026-02-15 02:28:07.207624 | orchestrator | <= read (data resources)
2026-02-15 02:28:07.207631 | orchestrator |
2026-02-15 02:28:07.207635 | orchestrator | OpenTofu will perform the following actions:
2026-02-15 02:28:07.207639 | orchestrator |
2026-02-15 02:28:07.207644 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-02-15 02:28:07.207648 | orchestrator | # (config refers to values not yet known)
2026-02-15 02:28:07.207652 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-02-15 02:28:07.207656 | orchestrator | + checksum = (known after apply)
2026-02-15 02:28:07.207661 | orchestrator | + created_at = (known after apply)
2026-02-15 02:28:07.207665 | orchestrator | + file = (known after apply)
2026-02-15 02:28:07.207669 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.207694 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.207699 | orchestrator | + min_disk_gb = (known after apply)
2026-02-15 02:28:07.207703 | orchestrator | + min_ram_mb = (known after apply)
2026-02-15 02:28:07.207707 | orchestrator | + most_recent = true
2026-02-15 02:28:07.207711 | orchestrator | + name = (known after apply)
2026-02-15 02:28:07.207715 | orchestrator | + protected = (known after apply)
2026-02-15 02:28:07.207719 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.207729 | orchestrator | + schema = (known after apply)
2026-02-15 02:28:07.207739 | orchestrator | + size_bytes = (known after apply)
2026-02-15 02:28:07.207746 | orchestrator | + tags = (known after apply)
2026-02-15 02:28:07.207752 | orchestrator | + updated_at = (known after apply)
2026-02-15 02:28:07.207758 | orchestrator | }
2026-02-15 02:28:07.207909 | orchestrator |
2026-02-15 02:28:07.207918 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-02-15 02:28:07.207922 | orchestrator | # (config refers to values not yet known)
2026-02-15 02:28:07.207926 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-02-15 02:28:07.207930 | orchestrator | + checksum = (known after apply)
2026-02-15 02:28:07.207934 | orchestrator | + created_at = (known after apply)
2026-02-15 02:28:07.207939 | orchestrator | + file = (known after apply)
2026-02-15 02:28:07.207942 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.207946 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.207950 | orchestrator | + min_disk_gb = (known after apply)
2026-02-15 02:28:07.207954 | orchestrator | + min_ram_mb = (known after apply)
2026-02-15 02:28:07.207958 | orchestrator | + most_recent = true
2026-02-15 02:28:07.207962 | orchestrator | + name = (known after apply)
2026-02-15 02:28:07.207968 | orchestrator | + protected = (known after apply)
2026-02-15 02:28:07.207976 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.207984 | orchestrator | + schema = (known after apply)
2026-02-15 02:28:07.207991 | orchestrator | + size_bytes = (known after apply)
2026-02-15 02:28:07.207996 | orchestrator | + tags = (known after apply)
2026-02-15 02:28:07.208002 | orchestrator | + updated_at = (known after apply)
2026-02-15 02:28:07.208008 | orchestrator | }
2026-02-15 02:28:07.208017 | orchestrator |
2026-02-15 02:28:07.208023 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-02-15 02:28:07.208029 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-02-15 02:28:07.208035 | orchestrator | + content = (known after apply)
2026-02-15 02:28:07.208042 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-15 02:28:07.208048 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-15 02:28:07.208054 | orchestrator | + content_md5 = (known after apply)
2026-02-15 02:28:07.208060 | orchestrator | + content_sha1 = (known after apply)
2026-02-15 02:28:07.208066 | orchestrator | + content_sha256 = (known after apply)
2026-02-15 02:28:07.208072 | orchestrator | + content_sha512 = (known after apply)
2026-02-15 02:28:07.208076 | orchestrator | + directory_permission = "0777"
2026-02-15 02:28:07.208080 | orchestrator | + file_permission = "0644"
2026-02-15 02:28:07.208084 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-02-15 02:28:07.208088 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208092 | orchestrator | }
2026-02-15 02:28:07.208098 | orchestrator |
2026-02-15 02:28:07.208101 | orchestrator | # local_file.id_rsa_pub will be created
2026-02-15 02:28:07.208105 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-02-15 02:28:07.208109 | orchestrator | + content = (known after apply)
2026-02-15 02:28:07.208113 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-15 02:28:07.208117 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-15 02:28:07.208121 | orchestrator | + content_md5 = (known after apply)
2026-02-15 02:28:07.208124 | orchestrator | + content_sha1 = (known after apply)
2026-02-15 02:28:07.208128 | orchestrator | + content_sha256 = (known after apply)
2026-02-15 02:28:07.208143 | orchestrator | + content_sha512 = (known after apply)
2026-02-15 02:28:07.208147 | orchestrator | + directory_permission = "0777"
2026-02-15 02:28:07.208151 | orchestrator | + file_permission = "0644"
2026-02-15 02:28:07.208162 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-02-15 02:28:07.208168 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208174 | orchestrator | }
2026-02-15 02:28:07.208181 | orchestrator |
2026-02-15 02:28:07.208186 | orchestrator | # local_file.inventory will be created
2026-02-15 02:28:07.208192 | orchestrator | + resource "local_file" "inventory" {
2026-02-15 02:28:07.208199 | orchestrator | + content = (known after apply)
2026-02-15 02:28:07.208203 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-15 02:28:07.208207 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-15 02:28:07.208211 | orchestrator | + content_md5 = (known after apply)
2026-02-15 02:28:07.208215 | orchestrator | + content_sha1 = (known after apply)
2026-02-15 02:28:07.208219 | orchestrator | + content_sha256 = (known after apply)
2026-02-15 02:28:07.208223 | orchestrator | + content_sha512 = (known after apply)
2026-02-15 02:28:07.208227 | orchestrator | + directory_permission = "0777"
2026-02-15 02:28:07.208231 | orchestrator | + file_permission = "0644"
2026-02-15 02:28:07.208235 | orchestrator | + filename = "inventory.ci"
2026-02-15 02:28:07.208238 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208244 | orchestrator | }
2026-02-15 02:28:07.208254 | orchestrator |
2026-02-15 02:28:07.208258 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-02-15 02:28:07.208262 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-02-15 02:28:07.208266 | orchestrator | + content = (sensitive value)
2026-02-15 02:28:07.208270 | orchestrator | + content_base64sha256 = (known after apply)
2026-02-15 02:28:07.208273 | orchestrator | + content_base64sha512 = (known after apply)
2026-02-15 02:28:07.208277 | orchestrator | + content_md5 = (known after apply)
2026-02-15 02:28:07.208281 | orchestrator | + content_sha1 = (known after apply)
2026-02-15 02:28:07.208284 | orchestrator | + content_sha256 = (known after apply)
2026-02-15 02:28:07.208288 | orchestrator | + content_sha512 = (known after apply)
2026-02-15 02:28:07.208292 | orchestrator | + directory_permission = "0700"
2026-02-15 02:28:07.208295 | orchestrator | + file_permission = "0600"
2026-02-15 02:28:07.208299 | orchestrator | + filename = ".id_rsa.ci"
2026-02-15 02:28:07.208303 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208307 | orchestrator | }
2026-02-15 02:28:07.208310 | orchestrator |
2026-02-15 02:28:07.208314 | orchestrator | # null_resource.node_semaphore will be created
2026-02-15 02:28:07.208318 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-02-15 02:28:07.208321 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208325 | orchestrator | }
2026-02-15 02:28:07.208329 | orchestrator |
2026-02-15 02:28:07.208333 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-02-15 02:28:07.208337 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-02-15 02:28:07.208341 | orchestrator | + attachment = (known after apply)
2026-02-15 02:28:07.208345 | orchestrator | + availability_zone = "nova"
2026-02-15 02:28:07.208348 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208352 | orchestrator | + image_id = (known after apply)
2026-02-15 02:28:07.208356 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.208360 | orchestrator | + name = "testbed-volume-manager-base"
2026-02-15 02:28:07.208363 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.208367 | orchestrator | + size = 80
2026-02-15 02:28:07.208371 | orchestrator | + volume_retype_policy = "never"
2026-02-15 02:28:07.208375 | orchestrator | + volume_type = "ssd"
2026-02-15 02:28:07.208378 | orchestrator | }
2026-02-15 02:28:07.208382 | orchestrator |
2026-02-15 02:28:07.208386 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-02-15 02:28:07.208390 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-15 02:28:07.208393 | orchestrator | + attachment = (known after apply)
2026-02-15 02:28:07.208397 | orchestrator | + availability_zone = "nova"
2026-02-15 02:28:07.208401 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208408 | orchestrator | + image_id = (known after apply)
2026-02-15 02:28:07.208412 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.208415 | orchestrator | + name = "testbed-volume-0-node-base"
2026-02-15 02:28:07.208419 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.208423 | orchestrator | + size = 80
2026-02-15 02:28:07.208426 | orchestrator | + volume_retype_policy = "never"
2026-02-15 02:28:07.208430 | orchestrator | + volume_type = "ssd"
2026-02-15 02:28:07.208434 | orchestrator | }
2026-02-15 02:28:07.208439 | orchestrator |
2026-02-15 02:28:07.208443 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-02-15 02:28:07.208447 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-15 02:28:07.208451 | orchestrator | + attachment = (known after apply)
2026-02-15 02:28:07.208455 | orchestrator | + availability_zone = "nova"
2026-02-15 02:28:07.208458 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208462 | orchestrator | + image_id = (known after apply)
2026-02-15 02:28:07.208466 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.208470 | orchestrator | + name = "testbed-volume-1-node-base"
2026-02-15 02:28:07.208473 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.208477 | orchestrator | + size = 80
2026-02-15 02:28:07.208481 | orchestrator | + volume_retype_policy = "never"
2026-02-15 02:28:07.208484 | orchestrator | + volume_type = "ssd"
2026-02-15 02:28:07.208488 | orchestrator | }
2026-02-15 02:28:07.208492 | orchestrator |
2026-02-15 02:28:07.208495 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-02-15 02:28:07.208499 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-15 02:28:07.208503 | orchestrator | + attachment = (known after apply)
2026-02-15 02:28:07.208507 | orchestrator | + availability_zone = "nova"
2026-02-15 02:28:07.208510 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208514 | orchestrator | + image_id = (known after apply)
2026-02-15 02:28:07.208518 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.208521 | orchestrator | + name = "testbed-volume-2-node-base"
2026-02-15 02:28:07.208525 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.208529 | orchestrator | + size = 80
2026-02-15 02:28:07.208536 | orchestrator | + volume_retype_policy = "never"
2026-02-15 02:28:07.208540 | orchestrator | + volume_type = "ssd"
2026-02-15 02:28:07.208544 | orchestrator | }
2026-02-15 02:28:07.208548 | orchestrator |
2026-02-15 02:28:07.208551 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-02-15 02:28:07.208555 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-15 02:28:07.208559 | orchestrator | + attachment = (known after apply)
2026-02-15 02:28:07.208562 | orchestrator | + availability_zone = "nova"
2026-02-15 02:28:07.208566 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208570 | orchestrator | + image_id = (known after apply)
2026-02-15 02:28:07.208574 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.208577 | orchestrator | + name = "testbed-volume-3-node-base"
2026-02-15 02:28:07.208581 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.208585 | orchestrator | + size = 80
2026-02-15 02:28:07.208588 | orchestrator | + volume_retype_policy = "never"
2026-02-15 02:28:07.208592 | orchestrator | + volume_type = "ssd"
2026-02-15 02:28:07.208617 | orchestrator | }
2026-02-15 02:28:07.208627 | orchestrator |
2026-02-15 02:28:07.208634 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-02-15 02:28:07.208640 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-15 02:28:07.208646 | orchestrator | + attachment = (known after apply)
2026-02-15 02:28:07.208652 | orchestrator | + availability_zone = "nova"
2026-02-15 02:28:07.208658 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208670 | orchestrator | + image_id = (known after apply)
2026-02-15 02:28:07.208677 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.208683 | orchestrator | + name = "testbed-volume-4-node-base"
2026-02-15 02:28:07.208690 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.208697 | orchestrator | + size = 80
2026-02-15 02:28:07.208703 | orchestrator | + volume_retype_policy = "never"
2026-02-15 02:28:07.208707 | orchestrator | + volume_type = "ssd"
2026-02-15 02:28:07.208711 | orchestrator | }
2026-02-15 02:28:07.208715 | orchestrator |
2026-02-15 02:28:07.208719 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-02-15 02:28:07.208726 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-02-15 02:28:07.208735 | orchestrator | + attachment = (known after apply)
2026-02-15 02:28:07.208742 | orchestrator | + availability_zone = "nova"
2026-02-15 02:28:07.208748 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208753 | orchestrator | + image_id = (known after apply)
2026-02-15 02:28:07.208759 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.208766 | orchestrator | + name = "testbed-volume-5-node-base"
2026-02-15 02:28:07.208773 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.208777 | orchestrator | + size = 80
2026-02-15 02:28:07.208781 | orchestrator | + volume_retype_policy = "never"
2026-02-15 02:28:07.208784 | orchestrator | + volume_type = "ssd"
2026-02-15 02:28:07.208788 | orchestrator | }
2026-02-15 02:28:07.208792 | orchestrator |
2026-02-15 02:28:07.208796 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-02-15 02:28:07.208800 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-15 02:28:07.208804 | orchestrator | + attachment = (known after apply)
2026-02-15 02:28:07.208808 | orchestrator | + availability_zone = "nova"
2026-02-15 02:28:07.208812 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208816 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.208820 | orchestrator | + name = "testbed-volume-0-node-3"
2026-02-15 02:28:07.208823 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.208827 | orchestrator | + size = 20
2026-02-15 02:28:07.208831 | orchestrator | + volume_retype_policy = "never"
2026-02-15 02:28:07.208835 | orchestrator | + volume_type = "ssd"
2026-02-15 02:28:07.208839 | orchestrator | }
2026-02-15 02:28:07.208842 | orchestrator |
2026-02-15 02:28:07.208846 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-02-15 02:28:07.208850 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-15 02:28:07.208854 | orchestrator | + attachment = (known after apply)
2026-02-15 02:28:07.208857 | orchestrator | + availability_zone = "nova"
2026-02-15 02:28:07.208861 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208865 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.208869 | orchestrator | + name = "testbed-volume-1-node-4"
2026-02-15 02:28:07.208872 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.208876 | orchestrator | + size = 20
2026-02-15 02:28:07.208880 | orchestrator | + volume_retype_policy = "never"
2026-02-15 02:28:07.208883 | orchestrator | + volume_type = "ssd"
2026-02-15 02:28:07.208887 | orchestrator | }
2026-02-15 02:28:07.208935 | orchestrator |
2026-02-15 02:28:07.208939 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-02-15 02:28:07.208943 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-15 02:28:07.208946 | orchestrator | + attachment = (known after apply)
2026-02-15 02:28:07.208950 | orchestrator | + availability_zone = "nova"
2026-02-15 02:28:07.208954 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.208958 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.208961 | orchestrator | + name = "testbed-volume-2-node-5"
2026-02-15 02:28:07.208965 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.208973 | orchestrator | + size = 20
2026-02-15 02:28:07.208977 | orchestrator | + volume_retype_policy = "never"
2026-02-15 02:28:07.208981 | orchestrator | + volume_type = "ssd"
2026-02-15 02:28:07.208985 | orchestrator | }
2026-02-15 02:28:07.208991 | orchestrator |
2026-02-15 02:28:07.208997 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-02-15 02:28:07.209008 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-02-15 02:28:07.209014 | orchestrator | + attachment = (known after apply)
2026-02-15 02:28:07.209020 | orchestrator | + availability_zone = "nova"
2026-02-15 02:28:07.209027 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.209038 | orchestrator | + metadata = (known after apply)
2026-02-15 02:28:07.209044 | orchestrator | + name = "testbed-volume-3-node-3" 2026-02-15 02:28:07.209051 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.209057 | orchestrator | + size = 20 2026-02-15 02:28:07.209063 | orchestrator | + volume_retype_policy = "never" 2026-02-15 02:28:07.209069 | orchestrator | + volume_type = "ssd" 2026-02-15 02:28:07.209076 | orchestrator | } 2026-02-15 02:28:07.209082 | orchestrator | 2026-02-15 02:28:07.209088 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created 2026-02-15 02:28:07.209094 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-15 02:28:07.209101 | orchestrator | + attachment = (known after apply) 2026-02-15 02:28:07.209108 | orchestrator | + availability_zone = "nova" 2026-02-15 02:28:07.209113 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.209117 | orchestrator | + metadata = (known after apply) 2026-02-15 02:28:07.209121 | orchestrator | + name = "testbed-volume-4-node-4" 2026-02-15 02:28:07.209125 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.209128 | orchestrator | + size = 20 2026-02-15 02:28:07.209132 | orchestrator | + volume_retype_policy = "never" 2026-02-15 02:28:07.209136 | orchestrator | + volume_type = "ssd" 2026-02-15 02:28:07.209140 | orchestrator | } 2026-02-15 02:28:07.209144 | orchestrator | 2026-02-15 02:28:07.209147 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created 2026-02-15 02:28:07.209151 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-15 02:28:07.209155 | orchestrator | + attachment = (known after apply) 2026-02-15 02:28:07.209159 | orchestrator | + availability_zone = "nova" 2026-02-15 02:28:07.209163 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.209166 | orchestrator | + metadata = (known after apply) 2026-02-15 02:28:07.209170 | orchestrator | + name = "testbed-volume-5-node-5" 
2026-02-15 02:28:07.209174 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.209178 | orchestrator | + size = 20 2026-02-15 02:28:07.209182 | orchestrator | + volume_retype_policy = "never" 2026-02-15 02:28:07.209185 | orchestrator | + volume_type = "ssd" 2026-02-15 02:28:07.209189 | orchestrator | } 2026-02-15 02:28:07.209195 | orchestrator | 2026-02-15 02:28:07.209199 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created 2026-02-15 02:28:07.209203 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-15 02:28:07.209207 | orchestrator | + attachment = (known after apply) 2026-02-15 02:28:07.209211 | orchestrator | + availability_zone = "nova" 2026-02-15 02:28:07.209215 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.209218 | orchestrator | + metadata = (known after apply) 2026-02-15 02:28:07.209222 | orchestrator | + name = "testbed-volume-6-node-3" 2026-02-15 02:28:07.209226 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.209230 | orchestrator | + size = 20 2026-02-15 02:28:07.209233 | orchestrator | + volume_retype_policy = "never" 2026-02-15 02:28:07.209237 | orchestrator | + volume_type = "ssd" 2026-02-15 02:28:07.209241 | orchestrator | } 2026-02-15 02:28:07.209245 | orchestrator | 2026-02-15 02:28:07.209248 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created 2026-02-15 02:28:07.209252 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-15 02:28:07.209264 | orchestrator | + attachment = (known after apply) 2026-02-15 02:28:07.209268 | orchestrator | + availability_zone = "nova" 2026-02-15 02:28:07.209271 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.209275 | orchestrator | + metadata = (known after apply) 2026-02-15 02:28:07.209279 | orchestrator | + name = "testbed-volume-7-node-4" 2026-02-15 02:28:07.209283 | orchestrator | + region = (known after apply) 
2026-02-15 02:28:07.209286 | orchestrator | + size = 20 2026-02-15 02:28:07.209290 | orchestrator | + volume_retype_policy = "never" 2026-02-15 02:28:07.209294 | orchestrator | + volume_type = "ssd" 2026-02-15 02:28:07.209298 | orchestrator | } 2026-02-15 02:28:07.209302 | orchestrator | 2026-02-15 02:28:07.209306 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-02-15 02:28:07.209310 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-02-15 02:28:07.209314 | orchestrator | + attachment = (known after apply) 2026-02-15 02:28:07.209317 | orchestrator | + availability_zone = "nova" 2026-02-15 02:28:07.209321 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.209325 | orchestrator | + metadata = (known after apply) 2026-02-15 02:28:07.209329 | orchestrator | + name = "testbed-volume-8-node-5" 2026-02-15 02:28:07.209333 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.209336 | orchestrator | + size = 20 2026-02-15 02:28:07.209340 | orchestrator | + volume_retype_policy = "never" 2026-02-15 02:28:07.209344 | orchestrator | + volume_type = "ssd" 2026-02-15 02:28:07.209348 | orchestrator | } 2026-02-15 02:28:07.209353 | orchestrator | 2026-02-15 02:28:07.209357 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-02-15 02:28:07.209361 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-02-15 02:28:07.209365 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-15 02:28:07.209368 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-15 02:28:07.209372 | orchestrator | + all_metadata = (known after apply) 2026-02-15 02:28:07.209376 | orchestrator | + all_tags = (known after apply) 2026-02-15 02:28:07.209380 | orchestrator | + availability_zone = "nova" 2026-02-15 02:28:07.209383 | orchestrator | + config_drive = true 2026-02-15 02:28:07.209390 | orchestrator | + created = (known after apply) 
2026-02-15 02:28:07.209394 | orchestrator | + flavor_id = (known after apply) 2026-02-15 02:28:07.209398 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-02-15 02:28:07.209402 | orchestrator | + force_delete = false 2026-02-15 02:28:07.209405 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-15 02:28:07.209409 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.209413 | orchestrator | + image_id = (known after apply) 2026-02-15 02:28:07.209417 | orchestrator | + image_name = (known after apply) 2026-02-15 02:28:07.209420 | orchestrator | + key_pair = "testbed" 2026-02-15 02:28:07.209424 | orchestrator | + name = "testbed-manager" 2026-02-15 02:28:07.209428 | orchestrator | + power_state = "active" 2026-02-15 02:28:07.209431 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.209435 | orchestrator | + security_groups = (known after apply) 2026-02-15 02:28:07.209439 | orchestrator | + stop_before_destroy = false 2026-02-15 02:28:07.209443 | orchestrator | + updated = (known after apply) 2026-02-15 02:28:07.209446 | orchestrator | + user_data = (sensitive value) 2026-02-15 02:28:07.209450 | orchestrator | 2026-02-15 02:28:07.209454 | orchestrator | + block_device { 2026-02-15 02:28:07.209458 | orchestrator | + boot_index = 0 2026-02-15 02:28:07.209462 | orchestrator | + delete_on_termination = false 2026-02-15 02:28:07.209466 | orchestrator | + destination_type = "volume" 2026-02-15 02:28:07.209469 | orchestrator | + multiattach = false 2026-02-15 02:28:07.209474 | orchestrator | + source_type = "volume" 2026-02-15 02:28:07.209480 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.209492 | orchestrator | } 2026-02-15 02:28:07.209496 | orchestrator | 2026-02-15 02:28:07.209500 | orchestrator | + network { 2026-02-15 02:28:07.209504 | orchestrator | + access_network = false 2026-02-15 02:28:07.209508 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-15 02:28:07.209515 | orchestrator | + 
fixed_ip_v6 = (known after apply) 2026-02-15 02:28:07.209521 | orchestrator | + mac = (known after apply) 2026-02-15 02:28:07.209526 | orchestrator | + name = (known after apply) 2026-02-15 02:28:07.209565 | orchestrator | + port = (known after apply) 2026-02-15 02:28:07.209570 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.209573 | orchestrator | } 2026-02-15 02:28:07.209577 | orchestrator | } 2026-02-15 02:28:07.209583 | orchestrator | 2026-02-15 02:28:07.209587 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-02-15 02:28:07.209591 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-15 02:28:07.209608 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-15 02:28:07.209616 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-15 02:28:07.209620 | orchestrator | + all_metadata = (known after apply) 2026-02-15 02:28:07.209624 | orchestrator | + all_tags = (known after apply) 2026-02-15 02:28:07.209627 | orchestrator | + availability_zone = "nova" 2026-02-15 02:28:07.209631 | orchestrator | + config_drive = true 2026-02-15 02:28:07.209635 | orchestrator | + created = (known after apply) 2026-02-15 02:28:07.209639 | orchestrator | + flavor_id = (known after apply) 2026-02-15 02:28:07.209643 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-15 02:28:07.209647 | orchestrator | + force_delete = false 2026-02-15 02:28:07.209650 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-15 02:28:07.209654 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.209658 | orchestrator | + image_id = (known after apply) 2026-02-15 02:28:07.209662 | orchestrator | + image_name = (known after apply) 2026-02-15 02:28:07.209666 | orchestrator | + key_pair = "testbed" 2026-02-15 02:28:07.209669 | orchestrator | + name = "testbed-node-0" 2026-02-15 02:28:07.209673 | orchestrator | + power_state = "active" 2026-02-15 02:28:07.209677 | orchestrator | + region 
= (known after apply) 2026-02-15 02:28:07.209681 | orchestrator | + security_groups = (known after apply) 2026-02-15 02:28:07.209684 | orchestrator | + stop_before_destroy = false 2026-02-15 02:28:07.209688 | orchestrator | + updated = (known after apply) 2026-02-15 02:28:07.209692 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-15 02:28:07.209696 | orchestrator | 2026-02-15 02:28:07.209699 | orchestrator | + block_device { 2026-02-15 02:28:07.209703 | orchestrator | + boot_index = 0 2026-02-15 02:28:07.209707 | orchestrator | + delete_on_termination = false 2026-02-15 02:28:07.209711 | orchestrator | + destination_type = "volume" 2026-02-15 02:28:07.209714 | orchestrator | + multiattach = false 2026-02-15 02:28:07.209719 | orchestrator | + source_type = "volume" 2026-02-15 02:28:07.209725 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.209734 | orchestrator | } 2026-02-15 02:28:07.209741 | orchestrator | 2026-02-15 02:28:07.209747 | orchestrator | + network { 2026-02-15 02:28:07.209752 | orchestrator | + access_network = false 2026-02-15 02:28:07.209759 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-15 02:28:07.209765 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-15 02:28:07.209772 | orchestrator | + mac = (known after apply) 2026-02-15 02:28:07.209775 | orchestrator | + name = (known after apply) 2026-02-15 02:28:07.209779 | orchestrator | + port = (known after apply) 2026-02-15 02:28:07.209783 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.209786 | orchestrator | } 2026-02-15 02:28:07.209790 | orchestrator | } 2026-02-15 02:28:07.209797 | orchestrator | 2026-02-15 02:28:07.209801 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-02-15 02:28:07.209804 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-15 02:28:07.209808 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-15 
02:28:07.209824 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-15 02:28:07.209828 | orchestrator | + all_metadata = (known after apply) 2026-02-15 02:28:07.209832 | orchestrator | + all_tags = (known after apply) 2026-02-15 02:28:07.209835 | orchestrator | + availability_zone = "nova" 2026-02-15 02:28:07.209839 | orchestrator | + config_drive = true 2026-02-15 02:28:07.209843 | orchestrator | + created = (known after apply) 2026-02-15 02:28:07.209846 | orchestrator | + flavor_id = (known after apply) 2026-02-15 02:28:07.209850 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-15 02:28:07.209854 | orchestrator | + force_delete = false 2026-02-15 02:28:07.209858 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-15 02:28:07.209861 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.209865 | orchestrator | + image_id = (known after apply) 2026-02-15 02:28:07.209869 | orchestrator | + image_name = (known after apply) 2026-02-15 02:28:07.209872 | orchestrator | + key_pair = "testbed" 2026-02-15 02:28:07.209876 | orchestrator | + name = "testbed-node-1" 2026-02-15 02:28:07.209880 | orchestrator | + power_state = "active" 2026-02-15 02:28:07.209883 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.209887 | orchestrator | + security_groups = (known after apply) 2026-02-15 02:28:07.209891 | orchestrator | + stop_before_destroy = false 2026-02-15 02:28:07.209894 | orchestrator | + updated = (known after apply) 2026-02-15 02:28:07.209901 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-15 02:28:07.209907 | orchestrator | 2026-02-15 02:28:07.209913 | orchestrator | + block_device { 2026-02-15 02:28:07.209919 | orchestrator | + boot_index = 0 2026-02-15 02:28:07.209925 | orchestrator | + delete_on_termination = false 2026-02-15 02:28:07.209931 | orchestrator | + destination_type = "volume" 2026-02-15 02:28:07.209937 | orchestrator | + multiattach = false 2026-02-15 
02:28:07.209944 | orchestrator | + source_type = "volume" 2026-02-15 02:28:07.209948 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.209951 | orchestrator | } 2026-02-15 02:28:07.209955 | orchestrator | 2026-02-15 02:28:07.209959 | orchestrator | + network { 2026-02-15 02:28:07.209963 | orchestrator | + access_network = false 2026-02-15 02:28:07.209966 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-15 02:28:07.209970 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-15 02:28:07.209974 | orchestrator | + mac = (known after apply) 2026-02-15 02:28:07.209978 | orchestrator | + name = (known after apply) 2026-02-15 02:28:07.209981 | orchestrator | + port = (known after apply) 2026-02-15 02:28:07.209985 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.209989 | orchestrator | } 2026-02-15 02:28:07.209993 | orchestrator | } 2026-02-15 02:28:07.209996 | orchestrator | 2026-02-15 02:28:07.210000 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-02-15 02:28:07.210004 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-15 02:28:07.210008 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-15 02:28:07.210011 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-15 02:28:07.210045 | orchestrator | + all_metadata = (known after apply) 2026-02-15 02:28:07.210049 | orchestrator | + all_tags = (known after apply) 2026-02-15 02:28:07.210053 | orchestrator | + availability_zone = "nova" 2026-02-15 02:28:07.210057 | orchestrator | + config_drive = true 2026-02-15 02:28:07.210061 | orchestrator | + created = (known after apply) 2026-02-15 02:28:07.210064 | orchestrator | + flavor_id = (known after apply) 2026-02-15 02:28:07.210068 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-15 02:28:07.210072 | orchestrator | + force_delete = false 2026-02-15 02:28:07.210076 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-15 
02:28:07.210079 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.210083 | orchestrator | + image_id = (known after apply) 2026-02-15 02:28:07.210091 | orchestrator | + image_name = (known after apply) 2026-02-15 02:28:07.210094 | orchestrator | + key_pair = "testbed" 2026-02-15 02:28:07.210098 | orchestrator | + name = "testbed-node-2" 2026-02-15 02:28:07.210102 | orchestrator | + power_state = "active" 2026-02-15 02:28:07.210105 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.210109 | orchestrator | + security_groups = (known after apply) 2026-02-15 02:28:07.210113 | orchestrator | + stop_before_destroy = false 2026-02-15 02:28:07.210117 | orchestrator | + updated = (known after apply) 2026-02-15 02:28:07.210120 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-15 02:28:07.210124 | orchestrator | 2026-02-15 02:28:07.210128 | orchestrator | + block_device { 2026-02-15 02:28:07.210132 | orchestrator | + boot_index = 0 2026-02-15 02:28:07.210136 | orchestrator | + delete_on_termination = false 2026-02-15 02:28:07.210139 | orchestrator | + destination_type = "volume" 2026-02-15 02:28:07.210143 | orchestrator | + multiattach = false 2026-02-15 02:28:07.210147 | orchestrator | + source_type = "volume" 2026-02-15 02:28:07.210150 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.210154 | orchestrator | } 2026-02-15 02:28:07.210158 | orchestrator | 2026-02-15 02:28:07.210162 | orchestrator | + network { 2026-02-15 02:28:07.210165 | orchestrator | + access_network = false 2026-02-15 02:28:07.210169 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-15 02:28:07.210173 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-15 02:28:07.210177 | orchestrator | + mac = (known after apply) 2026-02-15 02:28:07.210180 | orchestrator | + name = (known after apply) 2026-02-15 02:28:07.210184 | orchestrator | + port = (known after apply) 2026-02-15 02:28:07.210188 | orchestrator | + uuid 
= (known after apply) 2026-02-15 02:28:07.210191 | orchestrator | } 2026-02-15 02:28:07.210195 | orchestrator | } 2026-02-15 02:28:07.210202 | orchestrator | 2026-02-15 02:28:07.210209 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-02-15 02:28:07.210213 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-15 02:28:07.210217 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-15 02:28:07.210221 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-15 02:28:07.210224 | orchestrator | + all_metadata = (known after apply) 2026-02-15 02:28:07.210228 | orchestrator | + all_tags = (known after apply) 2026-02-15 02:28:07.210232 | orchestrator | + availability_zone = "nova" 2026-02-15 02:28:07.210235 | orchestrator | + config_drive = true 2026-02-15 02:28:07.210239 | orchestrator | + created = (known after apply) 2026-02-15 02:28:07.210243 | orchestrator | + flavor_id = (known after apply) 2026-02-15 02:28:07.210247 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-15 02:28:07.210251 | orchestrator | + force_delete = false 2026-02-15 02:28:07.210254 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-15 02:28:07.210258 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.210262 | orchestrator | + image_id = (known after apply) 2026-02-15 02:28:07.210265 | orchestrator | + image_name = (known after apply) 2026-02-15 02:28:07.210269 | orchestrator | + key_pair = "testbed" 2026-02-15 02:28:07.210273 | orchestrator | + name = "testbed-node-3" 2026-02-15 02:28:07.210277 | orchestrator | + power_state = "active" 2026-02-15 02:28:07.210280 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.210284 | orchestrator | + security_groups = (known after apply) 2026-02-15 02:28:07.210288 | orchestrator | + stop_before_destroy = false 2026-02-15 02:28:07.210291 | orchestrator | + updated = (known after apply) 2026-02-15 02:28:07.210295 | orchestrator | + 
user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-15 02:28:07.210299 | orchestrator | 2026-02-15 02:28:07.210303 | orchestrator | + block_device { 2026-02-15 02:28:07.210306 | orchestrator | + boot_index = 0 2026-02-15 02:28:07.210310 | orchestrator | + delete_on_termination = false 2026-02-15 02:28:07.210314 | orchestrator | + destination_type = "volume" 2026-02-15 02:28:07.210320 | orchestrator | + multiattach = false 2026-02-15 02:28:07.210324 | orchestrator | + source_type = "volume" 2026-02-15 02:28:07.210328 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.210331 | orchestrator | } 2026-02-15 02:28:07.210335 | orchestrator | 2026-02-15 02:28:07.210339 | orchestrator | + network { 2026-02-15 02:28:07.210343 | orchestrator | + access_network = false 2026-02-15 02:28:07.210346 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-15 02:28:07.210350 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-15 02:28:07.210354 | orchestrator | + mac = (known after apply) 2026-02-15 02:28:07.210360 | orchestrator | + name = (known after apply) 2026-02-15 02:28:07.210369 | orchestrator | + port = (known after apply) 2026-02-15 02:28:07.210377 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.210383 | orchestrator | } 2026-02-15 02:28:07.210389 | orchestrator | } 2026-02-15 02:28:07.210395 | orchestrator | 2026-02-15 02:28:07.210402 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-02-15 02:28:07.210409 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-15 02:28:07.210415 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-15 02:28:07.210421 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-15 02:28:07.210427 | orchestrator | + all_metadata = (known after apply) 2026-02-15 02:28:07.210433 | orchestrator | + all_tags = (known after apply) 2026-02-15 02:28:07.210440 | orchestrator | + availability_zone = "nova" 2026-02-15 
02:28:07.210446 | orchestrator | + config_drive = true 2026-02-15 02:28:07.210451 | orchestrator | + created = (known after apply) 2026-02-15 02:28:07.210458 | orchestrator | + flavor_id = (known after apply) 2026-02-15 02:28:07.210464 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-15 02:28:07.210469 | orchestrator | + force_delete = false 2026-02-15 02:28:07.210475 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-15 02:28:07.210481 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.210487 | orchestrator | + image_id = (known after apply) 2026-02-15 02:28:07.210494 | orchestrator | + image_name = (known after apply) 2026-02-15 02:28:07.210501 | orchestrator | + key_pair = "testbed" 2026-02-15 02:28:07.210507 | orchestrator | + name = "testbed-node-4" 2026-02-15 02:28:07.210513 | orchestrator | + power_state = "active" 2026-02-15 02:28:07.210520 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.210524 | orchestrator | + security_groups = (known after apply) 2026-02-15 02:28:07.210528 | orchestrator | + stop_before_destroy = false 2026-02-15 02:28:07.210532 | orchestrator | + updated = (known after apply) 2026-02-15 02:28:07.210536 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-15 02:28:07.210540 | orchestrator | 2026-02-15 02:28:07.210544 | orchestrator | + block_device { 2026-02-15 02:28:07.210548 | orchestrator | + boot_index = 0 2026-02-15 02:28:07.210552 | orchestrator | + delete_on_termination = false 2026-02-15 02:28:07.210555 | orchestrator | + destination_type = "volume" 2026-02-15 02:28:07.210559 | orchestrator | + multiattach = false 2026-02-15 02:28:07.210563 | orchestrator | + source_type = "volume" 2026-02-15 02:28:07.210567 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.210570 | orchestrator | } 2026-02-15 02:28:07.210574 | orchestrator | 2026-02-15 02:28:07.210578 | orchestrator | + network { 2026-02-15 02:28:07.210581 | orchestrator | + 
access_network = false 2026-02-15 02:28:07.210585 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-15 02:28:07.210589 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-15 02:28:07.210593 | orchestrator | + mac = (known after apply) 2026-02-15 02:28:07.210637 | orchestrator | + name = (known after apply) 2026-02-15 02:28:07.210645 | orchestrator | + port = (known after apply) 2026-02-15 02:28:07.210650 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.210657 | orchestrator | } 2026-02-15 02:28:07.210664 | orchestrator | } 2026-02-15 02:28:07.210677 | orchestrator | 2026-02-15 02:28:07.210681 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-02-15 02:28:07.210685 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-02-15 02:28:07.210689 | orchestrator | + access_ip_v4 = (known after apply) 2026-02-15 02:28:07.210693 | orchestrator | + access_ip_v6 = (known after apply) 2026-02-15 02:28:07.210697 | orchestrator | + all_metadata = (known after apply) 2026-02-15 02:28:07.210700 | orchestrator | + all_tags = (known after apply) 2026-02-15 02:28:07.210704 | orchestrator | + availability_zone = "nova" 2026-02-15 02:28:07.210708 | orchestrator | + config_drive = true 2026-02-15 02:28:07.210713 | orchestrator | + created = (known after apply) 2026-02-15 02:28:07.210716 | orchestrator | + flavor_id = (known after apply) 2026-02-15 02:28:07.210722 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-02-15 02:28:07.210728 | orchestrator | + force_delete = false 2026-02-15 02:28:07.210740 | orchestrator | + hypervisor_hostname = (known after apply) 2026-02-15 02:28:07.210746 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.210752 | orchestrator | + image_id = (known after apply) 2026-02-15 02:28:07.210758 | orchestrator | + image_name = (known after apply) 2026-02-15 02:28:07.210764 | orchestrator | + key_pair = "testbed" 2026-02-15 02:28:07.210771 | orchestrator | 
+ name = "testbed-node-5" 2026-02-15 02:28:07.210778 | orchestrator | + power_state = "active" 2026-02-15 02:28:07.210784 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.210790 | orchestrator | + security_groups = (known after apply) 2026-02-15 02:28:07.210793 | orchestrator | + stop_before_destroy = false 2026-02-15 02:28:07.210797 | orchestrator | + updated = (known after apply) 2026-02-15 02:28:07.210801 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-02-15 02:28:07.210805 | orchestrator | 2026-02-15 02:28:07.210809 | orchestrator | + block_device { 2026-02-15 02:28:07.210813 | orchestrator | + boot_index = 0 2026-02-15 02:28:07.210817 | orchestrator | + delete_on_termination = false 2026-02-15 02:28:07.210821 | orchestrator | + destination_type = "volume" 2026-02-15 02:28:07.210824 | orchestrator | + multiattach = false 2026-02-15 02:28:07.210828 | orchestrator | + source_type = "volume" 2026-02-15 02:28:07.210832 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.210835 | orchestrator | } 2026-02-15 02:28:07.210839 | orchestrator | 2026-02-15 02:28:07.210843 | orchestrator | + network { 2026-02-15 02:28:07.210847 | orchestrator | + access_network = false 2026-02-15 02:28:07.210851 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-02-15 02:28:07.210855 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-02-15 02:28:07.210859 | orchestrator | + mac = (known after apply) 2026-02-15 02:28:07.210862 | orchestrator | + name = (known after apply) 2026-02-15 02:28:07.210866 | orchestrator | + port = (known after apply) 2026-02-15 02:28:07.210870 | orchestrator | + uuid = (known after apply) 2026-02-15 02:28:07.210874 | orchestrator | } 2026-02-15 02:28:07.210877 | orchestrator | } 2026-02-15 02:28:07.210881 | orchestrator | 2026-02-15 02:28:07.210885 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-02-15 02:28:07.210889 | orchestrator | + resource 
"openstack_compute_keypair_v2" "key" { 2026-02-15 02:28:07.210893 | orchestrator | + fingerprint = (known after apply) 2026-02-15 02:28:07.210896 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.210900 | orchestrator | + name = "testbed" 2026-02-15 02:28:07.210904 | orchestrator | + private_key = (sensitive value) 2026-02-15 02:28:07.210908 | orchestrator | + public_key = (known after apply) 2026-02-15 02:28:07.210912 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.210915 | orchestrator | + user_id = (known after apply) 2026-02-15 02:28:07.210919 | orchestrator | } 2026-02-15 02:28:07.210923 | orchestrator | 2026-02-15 02:28:07.210927 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-02-15 02:28:07.210931 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-15 02:28:07.210940 | orchestrator | + device = (known after apply) 2026-02-15 02:28:07.210944 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.210948 | orchestrator | + instance_id = (known after apply) 2026-02-15 02:28:07.210951 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.210959 | orchestrator | + volume_id = (known after apply) 2026-02-15 02:28:07.210964 | orchestrator | } 2026-02-15 02:28:07.210967 | orchestrator | 2026-02-15 02:28:07.210971 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-02-15 02:28:07.210975 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-15 02:28:07.210979 | orchestrator | + device = (known after apply) 2026-02-15 02:28:07.210983 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.210986 | orchestrator | + instance_id = (known after apply) 2026-02-15 02:28:07.210990 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.210994 | orchestrator | + volume_id = (known after apply) 2026-02-15 
02:28:07.210998 | orchestrator | } 2026-02-15 02:28:07.211001 | orchestrator | 2026-02-15 02:28:07.211005 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-02-15 02:28:07.211009 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-15 02:28:07.211013 | orchestrator | + device = (known after apply) 2026-02-15 02:28:07.211017 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.211020 | orchestrator | + instance_id = (known after apply) 2026-02-15 02:28:07.211024 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.211028 | orchestrator | + volume_id = (known after apply) 2026-02-15 02:28:07.211032 | orchestrator | } 2026-02-15 02:28:07.211035 | orchestrator | 2026-02-15 02:28:07.211039 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created 2026-02-15 02:28:07.211043 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-15 02:28:07.211047 | orchestrator | + device = (known after apply) 2026-02-15 02:28:07.211051 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.211054 | orchestrator | + instance_id = (known after apply) 2026-02-15 02:28:07.211058 | orchestrator | + region = (known after apply) 2026-02-15 02:28:07.211062 | orchestrator | + volume_id = (known after apply) 2026-02-15 02:28:07.211065 | orchestrator | } 2026-02-15 02:28:07.211069 | orchestrator | 2026-02-15 02:28:07.211073 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created 2026-02-15 02:28:07.211077 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-02-15 02:28:07.211081 | orchestrator | + device = (known after apply) 2026-02-15 02:28:07.211085 | orchestrator | + id = (known after apply) 2026-02-15 02:28:07.211089 | orchestrator | + instance_id = (known after apply) 2026-02-15 02:28:07.211093 | 
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags        = (known after apply)
      + cidr            = "192.168.16.0/20"
      + dns_nameservers = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp     = true
      + gateway_ip      = (known after apply)
      + id              = (known after apply)
2026-02-15 02:28:07.213742 | orchestrator | + ip_version = 4
2026-02-15 02:28:07.213746 | orchestrator | + ipv6_address_mode = (known after apply)
2026-02-15 02:28:07.213749 | orchestrator | + ipv6_ra_mode = (known after apply)
2026-02-15 02:28:07.213753 | orchestrator | + name = "subnet-testbed-management"
2026-02-15 02:28:07.213757 | orchestrator | + network_id = (known after apply)
2026-02-15 02:28:07.213761 | orchestrator | + no_gateway = false
2026-02-15 02:28:07.213765 | orchestrator | + region = (known after apply)
2026-02-15 02:28:07.213768 | orchestrator | + service_types = (known after apply)
2026-02-15 02:28:07.213776 | orchestrator | + tenant_id = (known after apply)
2026-02-15 02:28:07.213780 | orchestrator |
2026-02-15 02:28:07.213783 | orchestrator | + allocation_pool {
2026-02-15 02:28:07.213787 | orchestrator | + end = "192.168.31.250"
2026-02-15 02:28:07.213791 | orchestrator | + start = "192.168.31.200"
2026-02-15 02:28:07.213795 | orchestrator | }
2026-02-15 02:28:07.213799 | orchestrator | }
2026-02-15 02:28:07.213803 | orchestrator |
2026-02-15 02:28:07.213806 | orchestrator | # terraform_data.image will be created
2026-02-15 02:28:07.213810 | orchestrator | + resource "terraform_data" "image" {
2026-02-15 02:28:07.213814 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.213818 | orchestrator | + input = "Ubuntu 24.04"
2026-02-15 02:28:07.213821 | orchestrator | + output = (known after apply)
2026-02-15 02:28:07.213825 | orchestrator | }
2026-02-15 02:28:07.213829 | orchestrator |
2026-02-15 02:28:07.213833 | orchestrator | # terraform_data.image_node will be created
2026-02-15 02:28:07.213837 | orchestrator | + resource "terraform_data" "image_node" {
2026-02-15 02:28:07.213841 | orchestrator | + id = (known after apply)
2026-02-15 02:28:07.213845 | orchestrator | + input = "Ubuntu 24.04"
2026-02-15 02:28:07.213848 | orchestrator | + output = (known after apply)
2026-02-15 02:28:07.213852 | orchestrator | }
2026-02-15 02:28:07.213856 | orchestrator |
2026-02-15 02:28:07.213860 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy.
2026-02-15 02:28:07.213863 | orchestrator |
2026-02-15 02:28:07.213867 | orchestrator | Changes to Outputs:
2026-02-15 02:28:07.213871 | orchestrator | + manager_address = (sensitive value)
2026-02-15 02:28:07.213875 | orchestrator | + private_key = (sensitive value)
2026-02-15 02:28:07.305118 | orchestrator | terraform_data.image: Creating...
2026-02-15 02:28:07.306519 | orchestrator | terraform_data.image: Creation complete after 0s [id=95ae323e-9a49-fb10-dd87-49972599c8d5]
2026-02-15 02:28:07.436806 | orchestrator | terraform_data.image_node: Creating...
2026-02-15 02:28:07.437465 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=3a96e9c7-82ec-6710-7ec8-a8e949eef805]
2026-02-15 02:28:07.445264 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-02-15 02:28:07.445652 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-02-15 02:28:07.458489 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-02-15 02:28:07.460620 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-02-15 02:28:07.461765 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-02-15 02:28:07.461812 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-02-15 02:28:07.462898 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-02-15 02:28:07.475613 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-02-15 02:28:07.486001 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-02-15 02:28:07.495502 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
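The plan entries above translate back into provider HCL fairly directly. A minimal sketch of two of the planned resources, reconstructed from the plan output (attribute values are taken from the plan; the `security_group_id` and `network_id` references are assumptions, since the plan only shows them as "(known after apply)"):

```hcl
# Sketch reconstructed from the terraform plan output; cross-resource
# references are assumed, not taken from the actual testbed repository.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol number 112 is VRRP (used by keepalived)
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id # assumed target
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note that the DHCP allocation pool (192.168.31.200-250) sits at the top of the /20, which keeps the lower part of the range free for the statically addressed manager and node ports.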
2026-02-15 02:28:07.958409 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-15 02:28:07.963488 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-02-15 02:28:07.969930 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-02-15 02:28:07.973923 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-02-15 02:28:08.061108 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-02-15 02:28:08.068684 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-02-15 02:28:08.482901 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=f1e3a678-b1bf-4873-b748-9703eb3898d9]
2026-02-15 02:28:08.491524 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-02-15 02:28:11.100231 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=b30e735a-b22c-4e42-bb85-734d9c181b6e]
2026-02-15 02:28:11.105002 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-02-15 02:28:11.113118 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=7cc59cd1-b9bd-45a5-8870-6b105d7c74c7]
2026-02-15 02:28:11.116977 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-02-15 02:28:11.132653 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=b2a7c6af-0e01-4433-817a-01c5d828c090]
2026-02-15 02:28:11.137366 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-02-15 02:28:11.147762 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=d453eee5-ccb1-47a4-84c4-d84ad638bc71]
2026-02-15 02:28:11.153390 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=d479ce5c-4f98-42f4-9c6b-b762f9d34a57]
2026-02-15 02:28:11.168387 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-02-15 02:28:11.171045 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=bfdd46b1-6e80-4940-b9c3-db3605a460a0]
2026-02-15 02:28:11.172787 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-02-15 02:28:11.175689 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-02-15 02:28:11.220311 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=4783efc4-2c45-47ca-9463-c51e8fa27ad2]
2026-02-15 02:28:11.234879 | orchestrator | local_file.id_rsa_pub: Creating...
2026-02-15 02:28:11.240814 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=dc4575d012eadf012ecfe83c7a1046cd1c4d9670]
2026-02-15 02:28:11.244198 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=1ca6afbc-10a2-4ec5-8c49-662ac545d94f]
2026-02-15 02:28:11.248992 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-02-15 02:28:11.249216 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-02-15 02:28:11.257649 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=2bfba0a0be0b6591fbef5292cc6e36fded214e1b]
2026-02-15 02:28:11.288411 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=3b876a0f-d488-4022-9acb-dce2cb7c3b58]
2026-02-15 02:28:11.856564 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=6cdab0dd-845d-4482-b01f-950374c91f45]
2026-02-15 02:28:12.107613 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=c850582b-474c-4041-88a3-9dd3aa08070a]
2026-02-15 02:28:12.114300 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-02-15 02:28:14.478393 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=37951a5f-9a29-4d71-b98b-e7992be6d9db]
2026-02-15 02:28:14.529461 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=47bb0aa1-854d-4042-a0dd-8afa6c7f18e0]
2026-02-15 02:28:14.539101 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd]
2026-02-15 02:28:14.588641 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=7713f0f4-7c56-4d74-9f60-9875e1b6d006]
2026-02-15 02:28:14.627978 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=f6c6941f-d825-4354-824d-63e95e31c47e]
2026-02-15 02:28:14.662644 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=1976e1cf-6346-4412-9b3b-15c43c691264]
2026-02-15 02:28:15.263770 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=6340e095-69f1-4684-a034-06fc5bfbef04]
2026-02-15 02:28:15.270380 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-02-15 02:28:15.275075 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
2026-02-15 02:28:15.276924 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-02-15 02:28:15.448141 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=9046e425-4178-458c-92a7-a7d9ca43f406]
2026-02-15 02:28:15.453520 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-02-15 02:28:15.454102 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-02-15 02:28:15.457181 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-02-15 02:28:15.457385 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-02-15 02:28:15.457585 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-02-15 02:28:15.465479 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-02-15 02:28:15.479868 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=ca7fc0a0-e205-4e7c-8294-98860d470cd4]
2026-02-15 02:28:15.487044 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-02-15 02:28:15.487135 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-02-15 02:28:15.495139 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-02-15 02:28:15.626106 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=77635b14-efda-4788-ba4a-894d80d4db80]
2026-02-15 02:28:15.632438 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-02-15 02:28:15.659868 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=e778a8db-9c54-4eab-a08b-ca32429cd921]
2026-02-15 02:28:15.666182 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-02-15 02:28:15.776993 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=4b760a34-e4df-4cb4-9326-7dbcbd7c6831]
2026-02-15 02:28:15.784531 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-02-15 02:28:15.925967 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=ff75d3b8-3562-4a06-93c1-1e3b98870aff]
2026-02-15 02:28:15.935123 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-02-15 02:28:16.069218 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=f516548a-8c6f-4131-81a5-35a86d857631]
2026-02-15 02:28:16.073640 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-02-15 02:28:16.106723 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=177f3ce5-2668-4ed1-84b4-f39bbcecbe9f]
2026-02-15 02:28:16.110928 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-02-15 02:28:16.241978 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=683436df-e816-4eee-a1ac-222ebb9f0339]
2026-02-15 02:28:16.246142 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-02-15 02:28:16.249567 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=49244c60-68a2-4036-af30-d370805471f0]
2026-02-15 02:28:16.329145 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=68f59e5e-1c55-4aa2-a3d7-5f91cd60fad3]
2026-02-15 02:28:16.362100 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 0s [id=06e16ad1-87be-40cf-9670-fc00713741d2]
2026-02-15 02:28:16.417765 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 0s [id=29eb1ea8-030f-4327-83ea-991a9f00817f]
2026-02-15 02:28:16.419159 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=f2b707a7-a908-4a59-b387-72bb36fa5491]
2026-02-15 02:28:16.485455 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=6f2205a6-5fc5-44bf-bcab-983266a12c70]
2026-02-15 02:28:16.504816 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=71d7ec8a-009f-4f2d-b5dd-fb578ce37fd0]
2026-02-15 02:28:16.538249 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=06b258d1-5898-4ef0-abe0-f0a023fda691]
2026-02-15 02:28:16.571035 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=e4cc61bc-b680-43fb-8534-0ff28b039f8c]
2026-02-15 02:28:17.502996 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=6be17c69-6191-46b5-a89a-b880a14ded6a]
2026-02-15 02:28:17.540091 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-02-15 02:28:17.542188 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
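The order of operations in the log (ports first, then the router interface, then the servers) matches the usual pattern of pre-creating Neutron ports and handing them to the instances. A minimal sketch of how `node_server` is likely wired to `node_port_management`, under stated assumptions: the resource names and the count of six servers come from the log, while the server name pattern, `flavor_name`, and the image reference are illustrative guesses, not values from the testbed repository.

```hcl
# Sketch only: wiring count-indexed servers to pre-created management ports.
# flavor_name and the name pattern are hypothetical placeholders.
resource "openstack_compute_instance_v2" "node_server" {
  count       = 6
  name        = "testbed-node-${count.index}" # assumed naming scheme
  image_id    = data.openstack_images_image_v2.image_node.id
  flavor_name = "example-flavor" # placeholder, not from the log
  key_pair    = openstack_compute_keypair_v2.key.name

  network {
    # Attaching via a pre-created port lets fixed IPs and security groups
    # be managed on the port resource rather than on the instance.
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```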
2026-02-15 02:28:17.542634 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-02-15 02:28:17.555130 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-02-15 02:28:17.565763 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-02-15 02:28:17.566089 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-02-15 02:28:17.572285 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-02-15 02:28:18.830950 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=211afe8b-ea3a-4a99-a3d4-9420ae86b077]
2026-02-15 02:28:18.840076 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-02-15 02:28:18.840139 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-02-15 02:28:18.841203 | orchestrator | local_file.inventory: Creating...
2026-02-15 02:28:18.844684 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=5575426ebc3eff145ae4d340dc9b0148f0ad9c1c]
2026-02-15 02:28:18.845817 | orchestrator | local_file.inventory: Creation complete after 0s [id=10a230598d5ee7d56534a25a29e6bc9b0722559c]
2026-02-15 02:28:20.691068 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=211afe8b-ea3a-4a99-a3d4-9420ae86b077]
2026-02-15 02:28:27.542209 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-02-15 02:28:27.543368 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-02-15 02:28:27.556693 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-02-15 02:28:27.567117 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-02-15 02:28:27.567199 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-02-15 02:28:27.575630 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-02-15 02:28:37.551906 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-02-15 02:28:37.552002 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-02-15 02:28:37.557417 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-02-15 02:28:37.567907 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-02-15 02:28:37.568010 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-02-15 02:28:37.576337 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-02-15 02:28:38.011066 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=4a0769ef-5371-4df1-a5ac-18f8f7185015]
2026-02-15 02:28:38.064499 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=609b8860-69de-4e9d-8d17-01b4657a1c85]
2026-02-15 02:28:38.176418 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=a7346d88-c9bb-4349-90fc-226a0a995aaf]
2026-02-15 02:28:47.558408 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-02-15 02:28:47.558496 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-02-15 02:28:47.568736 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2026-02-15 02:28:48.270421 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=013ed6b6-17ca-412a-a772-cefb12863301]
2026-02-15 02:28:48.322551 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=0a7e3698-dcd0-40a2-87b2-e07c074bc861]
2026-02-15 02:28:48.437554 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=45494dd3-3877-4ff4-b5af-4f9f29996b72]
2026-02-15 02:28:48.452188 | orchestrator | null_resource.node_semaphore: Creating...
2026-02-15 02:28:48.469949 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=1194505517057352210]
2026-02-15 02:28:48.483223 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-02-15 02:28:48.484679 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-02-15 02:28:48.490202 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-02-15 02:28:48.490847 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-02-15 02:28:48.491160 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-02-15 02:28:48.492432 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-02-15 02:28:48.776980 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-02-15 02:28:48.802708 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-02-15 02:28:48.810746 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2026-02-15 02:28:48.820387 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
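The `node_semaphore` null_resource completing immediately before the nine volume attachments start suggests it serializes the attach phase behind server creation. A minimal sketch of the attachment wiring, under stated assumptions: the resource names and the count of nine attachments come from the log, while the exact index-to-server mapping and the `depends_on` edge are guesses (the attachment IDs in the log show three volumes landing on each of three servers, but the real expression is not visible here).

```hcl
# Sketch only: the modulo mapping and depends_on are assumptions inferred
# from the ordering in the log, not from the actual testbed configuration.
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3].id # mapping assumed
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id

  # Gate attachments on the semaphore so they only start once all servers exist.
  depends_on = [null_resource.node_semaphore]
}
```

The attachment resource ID is the composite `instance_id/volume_id` pair, which is exactly what the `Creation complete` lines below report.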
2026-02-15 02:28:52.144773 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=4a0769ef-5371-4df1-a5ac-18f8f7185015/b30e735a-b22c-4e42-bb85-734d9c181b6e]
2026-02-15 02:28:52.165079 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=4a0769ef-5371-4df1-a5ac-18f8f7185015/d453eee5-ccb1-47a4-84c4-d84ad638bc71]
2026-02-15 02:28:52.172771 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 4s [id=013ed6b6-17ca-412a-a772-cefb12863301/1ca6afbc-10a2-4ec5-8c49-662ac545d94f]
2026-02-15 02:28:52.183259 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=013ed6b6-17ca-412a-a772-cefb12863301/3b876a0f-d488-4022-9acb-dce2cb7c3b58]
2026-02-15 02:28:52.196969 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=a7346d88-c9bb-4349-90fc-226a0a995aaf/7cc59cd1-b9bd-45a5-8870-6b105d7c74c7]
2026-02-15 02:28:52.239923 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=a7346d88-c9bb-4349-90fc-226a0a995aaf/bfdd46b1-6e80-4940-b9c3-db3605a460a0]
2026-02-15 02:28:52.470962 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=013ed6b6-17ca-412a-a772-cefb12863301/4783efc4-2c45-47ca-9463-c51e8fa27ad2]
2026-02-15 02:28:58.248989 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=4a0769ef-5371-4df1-a5ac-18f8f7185015/b2a7c6af-0e01-4433-817a-01c5d828c090]
2026-02-15 02:28:58.325394 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=a7346d88-c9bb-4349-90fc-226a0a995aaf/d479ce5c-4f98-42f4-9c6b-b762f9d34a57]
2026-02-15 02:28:58.821997 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-02-15 02:29:08.822393 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-02-15 02:29:09.202856 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=e68e659c-f2ea-48cc-8b10-0d1f3f2e0d6e]
2026-02-15 02:29:09.216638 | orchestrator |
2026-02-15 02:29:09.216727 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-02-15 02:29:09.216748 | orchestrator |
2026-02-15 02:29:09.216760 | orchestrator | Outputs:
2026-02-15 02:29:09.216773 | orchestrator |
2026-02-15 02:29:09.216785 | orchestrator | manager_address =
2026-02-15 02:29:09.216797 | orchestrator | private_key =
2026-02-15 02:29:09.559049 | orchestrator | ok: Runtime: 0:01:08.287568
2026-02-15 02:29:09.595148 |
2026-02-15 02:29:09.595303 | TASK [Fetch manager address]
2026-02-15 02:29:10.080899 | orchestrator | ok
2026-02-15 02:29:10.091926 |
2026-02-15 02:29:10.092092 | TASK [Set manager_host address]
2026-02-15 02:29:10.172276 | orchestrator | ok
2026-02-15 02:29:10.181435 |
2026-02-15 02:29:10.181562 | LOOP [Update ansible collections]
2026-02-15 02:29:12.070754 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-15 02:29:12.071198 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-15 02:29:12.071255 | orchestrator | Starting galaxy collection install process
2026-02-15 02:29:12.071292 | orchestrator | Process install dependency map
2026-02-15 02:29:12.071323 | orchestrator | Starting collection install process
2026-02-15 02:29:12.071352 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-02-15 02:29:12.071387 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-02-15 02:29:12.071423 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-02-15 02:29:12.071487 | orchestrator | ok: Item: commons Runtime: 0:00:01.552073
2026-02-15 02:29:13.003562 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-15 02:29:13.003786 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-02-15 02:29:13.003856 | orchestrator | Starting galaxy collection install process
2026-02-15 02:29:13.003910 | orchestrator | Process install dependency map
2026-02-15 02:29:13.003958 | orchestrator | Starting collection install process
2026-02-15 02:29:13.004002 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-02-15 02:29:13.004046 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-02-15 02:29:13.004105 | orchestrator | osism.services:999.0.0 was installed successfully
2026-02-15 02:29:13.004170 | orchestrator | ok: Item: services Runtime: 0:00:00.641097
2026-02-15 02:29:13.030575 |
2026-02-15 02:29:13.030734 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-02-15 02:29:23.587049 | orchestrator | ok
2026-02-15 02:29:23.597355 |
2026-02-15 02:29:23.597472 | TASK [Wait a little longer for the manager so that everything is ready]
2026-02-15 02:30:23.646984 | orchestrator | ok
2026-02-15 02:30:23.656961 |
2026-02-15 02:30:23.657104 | TASK [Fetch manager ssh hostkey]
2026-02-15 02:30:25.235574 | orchestrator | Output suppressed because no_log was given
2026-02-15 02:30:25.246033 |
2026-02-15 02:30:25.246250 | TASK [Get ssh keypair from terraform environment]
2026-02-15 02:30:25.781998 | orchestrator | ok: Runtime: 0:00:00.011166
2026-02-15 02:30:25.799953 |
2026-02-15 02:30:25.800151 | TASK [Point out that the following task takes some time and does not give any output]
2026-02-15 02:30:25.841102 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-02-15 02:30:25.851244 |
2026-02-15 02:30:25.851392 | TASK [Run manager part 0]
2026-02-15 02:30:27.065757 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-02-15 02:30:27.220802 | orchestrator |
2026-02-15 02:30:27.220867 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-02-15 02:30:27.220881 | orchestrator |
2026-02-15 02:30:27.220906 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-02-15 02:30:29.211327 | orchestrator | ok: [testbed-manager]
2026-02-15 02:30:29.211390 | orchestrator |
2026-02-15 02:30:29.211428 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-02-15 02:30:29.211443 | orchestrator |
2026-02-15 02:30:29.211458 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-15 02:30:31.234075 | orchestrator | ok: [testbed-manager]
2026-02-15 02:30:31.234166 | orchestrator |
2026-02-15 02:30:31.234193 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-02-15 02:30:31.921079 | orchestrator | ok: [testbed-manager]
2026-02-15 02:30:31.921135 | orchestrator |
2026-02-15 02:30:31.921147 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-02-15 02:30:31.973694 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:30:31.973761 | orchestrator |
2026-02-15 02:30:31.973780 | orchestrator | TASK [Update package cache] ****************************************************
2026-02-15 02:30:32.017892 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:30:32.017954 | orchestrator |
2026-02-15 02:30:32.017967 | orchestrator | TASK [Install required packages] *********************************************** 2026-02-15 02:30:32.047189 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:30:32.047235 | orchestrator | 2026-02-15 02:30:32.047242 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-15 02:30:32.077091 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:30:32.077162 | orchestrator | 2026-02-15 02:30:32.077172 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-15 02:30:32.215110 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:30:32.215172 | orchestrator | 2026-02-15 02:30:32.215188 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-02-15 02:30:32.259671 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:30:32.259715 | orchestrator | 2026-02-15 02:30:32.259725 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-02-15 02:30:32.301246 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:30:32.301293 | orchestrator | 2026-02-15 02:30:32.301305 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-02-15 02:30:33.125376 | orchestrator | changed: [testbed-manager] 2026-02-15 02:30:33.125412 | orchestrator | 2026-02-15 02:30:33.125418 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-02-15 02:33:28.683358 | orchestrator | changed: [testbed-manager] 2026-02-15 02:33:28.683418 | orchestrator | 2026-02-15 02:33:28.683430 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-02-15 02:35:02.415377 | orchestrator | changed: [testbed-manager] 2026-02-15 02:35:02.415468 | orchestrator | 2026-02-15 02:35:02.415485 | orchestrator | TASK [Install required 
packages] *********************************************** 2026-02-15 02:35:28.672276 | orchestrator | changed: [testbed-manager] 2026-02-15 02:35:28.672352 | orchestrator | 2026-02-15 02:35:28.672362 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-02-15 02:35:38.849853 | orchestrator | changed: [testbed-manager] 2026-02-15 02:35:38.849899 | orchestrator | 2026-02-15 02:35:38.849907 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-15 02:35:38.899913 | orchestrator | ok: [testbed-manager] 2026-02-15 02:35:38.899960 | orchestrator | 2026-02-15 02:35:38.899971 | orchestrator | TASK [Get current user] ******************************************************** 2026-02-15 02:35:39.814350 | orchestrator | ok: [testbed-manager] 2026-02-15 02:35:39.815078 | orchestrator | 2026-02-15 02:35:39.815093 | orchestrator | TASK [Create venv directory] *************************************************** 2026-02-15 02:35:40.571115 | orchestrator | changed: [testbed-manager] 2026-02-15 02:35:40.571185 | orchestrator | 2026-02-15 02:35:40.571195 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-02-15 02:35:47.584707 | orchestrator | changed: [testbed-manager] 2026-02-15 02:35:47.584809 | orchestrator | 2026-02-15 02:35:47.584861 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-02-15 02:35:54.086288 | orchestrator | changed: [testbed-manager] 2026-02-15 02:35:54.086386 | orchestrator | 2026-02-15 02:35:54.086406 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-02-15 02:35:57.019664 | orchestrator | changed: [testbed-manager] 2026-02-15 02:35:57.019828 | orchestrator | 2026-02-15 02:35:57.019848 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-02-15 02:35:58.911252 | 
orchestrator | changed: [testbed-manager] 2026-02-15 02:35:58.911320 | orchestrator | 2026-02-15 02:35:58.911329 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-02-15 02:36:00.011051 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-15 02:36:00.011114 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-15 02:36:00.011122 | orchestrator | 2026-02-15 02:36:00.011128 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-02-15 02:36:00.050401 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-15 02:36:00.050451 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-15 02:36:00.050457 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-15 02:36:00.050462 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-15 02:36:05.686982 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-02-15 02:36:05.687033 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-02-15 02:36:05.687041 | orchestrator | 2026-02-15 02:36:05.687048 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-02-15 02:36:06.278347 | orchestrator | changed: [testbed-manager] 2026-02-15 02:36:06.278393 | orchestrator | 2026-02-15 02:36:06.278402 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-02-15 02:39:25.564304 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-02-15 02:39:25.564450 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-02-15 02:39:25.564483 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-02-15 02:39:25.564505 | orchestrator | 2026-02-15 02:39:25.564526 | orchestrator | TASK [Install local collections] *********************************************** 2026-02-15 02:39:28.078709 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-02-15 02:39:28.078748 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-02-15 02:39:28.078753 | orchestrator | 2026-02-15 02:39:28.078758 | orchestrator | PLAY [Create operator user] **************************************************** 2026-02-15 02:39:28.078763 | orchestrator | 2026-02-15 02:39:28.078767 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-15 02:39:29.562603 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:29.562638 | orchestrator | 2026-02-15 02:39:29.562645 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-15 02:39:29.604577 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:29.604610 | 
orchestrator | 2026-02-15 02:39:29.604616 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-15 02:39:29.678356 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:29.678396 | orchestrator | 2026-02-15 02:39:29.678404 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-15 02:39:30.500770 | orchestrator | changed: [testbed-manager] 2026-02-15 02:39:30.500806 | orchestrator | 2026-02-15 02:39:30.500812 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-15 02:39:31.239503 | orchestrator | changed: [testbed-manager] 2026-02-15 02:39:31.239572 | orchestrator | 2026-02-15 02:39:31.239581 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-15 02:39:32.638378 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-02-15 02:39:32.638475 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-02-15 02:39:32.638491 | orchestrator | 2026-02-15 02:39:32.638519 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-15 02:39:34.132991 | orchestrator | changed: [testbed-manager] 2026-02-15 02:39:34.133164 | orchestrator | 2026-02-15 02:39:34.133191 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-15 02:39:35.971996 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-02-15 02:39:35.972078 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-02-15 02:39:35.972089 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-02-15 02:39:35.972097 | orchestrator | 2026-02-15 02:39:35.972105 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-15 02:39:36.022892 | orchestrator | skipping: 
[testbed-manager] 2026-02-15 02:39:36.023012 | orchestrator | 2026-02-15 02:39:36.023029 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-15 02:39:36.096458 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:39:36.096568 | orchestrator | 2026-02-15 02:39:36.096597 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-15 02:39:36.716343 | orchestrator | changed: [testbed-manager] 2026-02-15 02:39:36.716388 | orchestrator | 2026-02-15 02:39:36.716396 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-15 02:39:36.785444 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:39:36.785481 | orchestrator | 2026-02-15 02:39:36.785487 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-15 02:39:37.687581 | orchestrator | changed: [testbed-manager] => (item=None) 2026-02-15 02:39:37.687621 | orchestrator | changed: [testbed-manager] 2026-02-15 02:39:37.687629 | orchestrator | 2026-02-15 02:39:37.687635 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-15 02:39:37.721518 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:39:37.721558 | orchestrator | 2026-02-15 02:39:37.721566 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-15 02:39:37.761033 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:39:37.761077 | orchestrator | 2026-02-15 02:39:37.761086 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-15 02:39:37.792139 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:39:37.792179 | orchestrator | 2026-02-15 02:39:37.792190 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-15 02:39:37.858315 | 
orchestrator | skipping: [testbed-manager] 2026-02-15 02:39:37.858371 | orchestrator | 2026-02-15 02:39:37.858384 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-15 02:39:38.576095 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:38.576129 | orchestrator | 2026-02-15 02:39:38.576135 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-02-15 02:39:38.576140 | orchestrator | 2026-02-15 02:39:38.576144 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-15 02:39:40.203180 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:40.203222 | orchestrator | 2026-02-15 02:39:40.203228 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-02-15 02:39:41.259230 | orchestrator | changed: [testbed-manager] 2026-02-15 02:39:41.259321 | orchestrator | 2026-02-15 02:39:41.259347 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 02:39:41.259370 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-02-15 02:39:41.259382 | orchestrator | 2026-02-15 02:39:41.482335 | orchestrator | ok: Runtime: 0:09:15.218981 2026-02-15 02:39:41.500858 | 2026-02-15 02:39:41.500998 | TASK [Point out that logging in on the manager is now possible] 2026-02-15 02:39:41.550395 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2026-02-15 02:39:41.560051 | 2026-02-15 02:39:41.560168 | TASK [Point out that the following task takes some time and does not give any output] 2026-02-15 02:39:41.608648 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
2026-02-15 02:39:41.618630 | 2026-02-15 02:39:41.618757 | TASK [Run manager part 1 + 2] 2026-02-15 02:39:42.513324 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-02-15 02:39:42.574273 | orchestrator | 2026-02-15 02:39:42.574325 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-02-15 02:39:42.574332 | orchestrator | 2026-02-15 02:39:42.574345 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-15 02:39:45.595541 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:45.595900 | orchestrator | 2026-02-15 02:39:45.595930 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-02-15 02:39:45.628363 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:39:45.628419 | orchestrator | 2026-02-15 02:39:45.628430 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-02-15 02:39:45.667833 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:45.667902 | orchestrator | 2026-02-15 02:39:45.667915 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-15 02:39:45.713498 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:45.713551 | orchestrator | 2026-02-15 02:39:45.713560 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-15 02:39:45.796905 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:45.796976 | orchestrator | 2026-02-15 02:39:45.796986 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-15 02:39:45.858840 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:45.858895 | orchestrator | 2026-02-15 02:39:45.858907 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-15 02:39:45.902227 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-02-15 02:39:45.902273 | orchestrator | 2026-02-15 02:39:45.902280 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-15 02:39:46.623406 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:46.623484 | orchestrator | 2026-02-15 02:39:46.623495 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-15 02:39:46.667614 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:39:46.667670 | orchestrator | 2026-02-15 02:39:46.667680 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-15 02:39:48.213245 | orchestrator | changed: [testbed-manager] 2026-02-15 02:39:48.213315 | orchestrator | 2026-02-15 02:39:48.213326 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-15 02:39:48.891087 | orchestrator | ok: [testbed-manager] 2026-02-15 02:39:48.891146 | orchestrator | 2026-02-15 02:39:48.891156 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-15 02:39:50.102131 | orchestrator | changed: [testbed-manager] 2026-02-15 02:39:50.102190 | orchestrator | 2026-02-15 02:39:50.102201 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-15 02:40:06.616251 | orchestrator | changed: [testbed-manager] 2026-02-15 02:40:06.616329 | orchestrator | 2026-02-15 02:40:06.616348 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-02-15 02:40:07.343276 | orchestrator | ok: [testbed-manager] 2026-02-15 02:40:07.343376 | orchestrator | 2026-02-15 02:40:07.343397 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-02-15 02:40:07.424657 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:40:07.424757 | orchestrator | 2026-02-15 02:40:07.424774 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-02-15 02:40:08.460310 | orchestrator | changed: [testbed-manager] 2026-02-15 02:40:08.460385 | orchestrator | 2026-02-15 02:40:08.460392 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-02-15 02:40:09.475484 | orchestrator | changed: [testbed-manager] 2026-02-15 02:40:09.475602 | orchestrator | 2026-02-15 02:40:09.475630 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-02-15 02:40:10.050313 | orchestrator | changed: [testbed-manager] 2026-02-15 02:40:10.050389 | orchestrator | 2026-02-15 02:40:10.050398 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-02-15 02:40:10.097164 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-02-15 02:40:10.097233 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-02-15 02:40:10.097239 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-02-15 02:40:10.097244 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-02-15 02:40:12.862301 | orchestrator | changed: [testbed-manager] 2026-02-15 02:40:12.862405 | orchestrator | 2026-02-15 02:40:12.862421 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-02-15 02:40:22.549323 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-02-15 02:40:22.549442 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-02-15 02:40:22.549470 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-02-15 02:40:22.549491 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-02-15 02:40:22.549523 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-02-15 02:40:22.549542 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-02-15 02:40:22.549562 | orchestrator | 2026-02-15 02:40:22.549582 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-02-15 02:40:23.666236 | orchestrator | changed: [testbed-manager] 2026-02-15 02:40:23.666314 | orchestrator | 2026-02-15 02:40:23.666325 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-02-15 02:40:23.712439 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:40:23.712537 | orchestrator | 2026-02-15 02:40:23.712554 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-02-15 02:40:27.089765 | orchestrator | changed: [testbed-manager] 2026-02-15 02:40:27.090635 | orchestrator | 2026-02-15 02:40:27.090657 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-02-15 02:40:27.127771 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:40:27.127861 | orchestrator | 2026-02-15 02:40:27.127876 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-02-15 02:42:18.879145 | orchestrator | changed: [testbed-manager] 2026-02-15 
02:42:18.879203 | orchestrator | 2026-02-15 02:42:18.879210 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-15 02:42:20.150572 | orchestrator | ok: [testbed-manager] 2026-02-15 02:42:20.151506 | orchestrator | 2026-02-15 02:42:20.151549 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 02:42:20.151558 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-02-15 02:42:20.151565 | orchestrator | 2026-02-15 02:42:20.766962 | orchestrator | ok: Runtime: 0:02:38.354479 2026-02-15 02:42:20.785049 | 2026-02-15 02:42:20.785218 | TASK [Reboot manager] 2026-02-15 02:42:22.323602 | orchestrator | ok: Runtime: 0:00:00.998456 2026-02-15 02:42:22.338956 | 2026-02-15 02:42:22.339109 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-02-15 02:42:39.287171 | orchestrator | ok 2026-02-15 02:42:39.297040 | 2026-02-15 02:42:39.297165 | TASK [Wait a little longer for the manager so that everything is ready] 2026-02-15 02:43:39.350676 | orchestrator | ok 2026-02-15 02:43:39.361295 | 2026-02-15 02:43:39.361440 | TASK [Deploy manager + bootstrap nodes] 2026-02-15 02:43:42.149402 | orchestrator | 2026-02-15 02:43:42.149568 | orchestrator | # DEPLOY MANAGER 2026-02-15 02:43:42.149586 | orchestrator | 2026-02-15 02:43:42.149597 | orchestrator | + set -e 2026-02-15 02:43:42.149607 | orchestrator | + echo 2026-02-15 02:43:42.149617 | orchestrator | + echo '# DEPLOY MANAGER' 2026-02-15 02:43:42.149630 | orchestrator | + echo 2026-02-15 02:43:42.149668 | orchestrator | + cat /opt/manager-vars.sh 2026-02-15 02:43:42.152511 | orchestrator | export NUMBER_OF_NODES=6 2026-02-15 02:43:42.152612 | orchestrator | 2026-02-15 02:43:42.152631 | orchestrator | export CEPH_VERSION=reef 2026-02-15 02:43:42.152645 | orchestrator | export CONFIGURATION_VERSION=main 2026-02-15 02:43:42.152660 | orchestrator 
| export MANAGER_VERSION=9.5.0 2026-02-15 02:43:42.152690 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-02-15 02:43:42.152703 | orchestrator | 2026-02-15 02:43:42.152723 | orchestrator | export ARA=false 2026-02-15 02:43:42.152736 | orchestrator | export DEPLOY_MODE=manager 2026-02-15 02:43:42.152756 | orchestrator | export TEMPEST=false 2026-02-15 02:43:42.152771 | orchestrator | export IS_ZUUL=true 2026-02-15 02:43:42.152784 | orchestrator | 2026-02-15 02:43:42.152803 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145 2026-02-15 02:43:42.152817 | orchestrator | export EXTERNAL_API=false 2026-02-15 02:43:42.152830 | orchestrator | 2026-02-15 02:43:42.152844 | orchestrator | export IMAGE_USER=ubuntu 2026-02-15 02:43:42.152862 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-02-15 02:43:42.152876 | orchestrator | 2026-02-15 02:43:42.152890 | orchestrator | export CEPH_STACK=ceph-ansible 2026-02-15 02:43:42.152903 | orchestrator | 2026-02-15 02:43:42.152915 | orchestrator | + echo 2026-02-15 02:43:42.152931 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-15 02:43:42.153950 | orchestrator | ++ export INTERACTIVE=false 2026-02-15 02:43:42.153977 | orchestrator | ++ INTERACTIVE=false 2026-02-15 02:43:42.153993 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-15 02:43:42.154005 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-15 02:43:42.154051 | orchestrator | + source /opt/manager-vars.sh 2026-02-15 02:43:42.154060 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-15 02:43:42.154068 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-15 02:43:42.154076 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-15 02:43:42.154083 | orchestrator | ++ CEPH_VERSION=reef 2026-02-15 02:43:42.154091 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-15 02:43:42.154099 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-15 02:43:42.154107 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-15 02:43:42.154115 | 
orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-15 02:43:42.154123 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-15 02:43:42.154142 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-15 02:43:42.154167 | orchestrator | ++ export ARA=false 2026-02-15 02:43:42.154176 | orchestrator | ++ ARA=false 2026-02-15 02:43:42.154184 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-15 02:43:42.154191 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-15 02:43:42.154205 | orchestrator | ++ export TEMPEST=false 2026-02-15 02:43:42.154213 | orchestrator | ++ TEMPEST=false 2026-02-15 02:43:42.154221 | orchestrator | ++ export IS_ZUUL=true 2026-02-15 02:43:42.154229 | orchestrator | ++ IS_ZUUL=true 2026-02-15 02:43:42.154237 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145 2026-02-15 02:43:42.154245 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145 2026-02-15 02:43:42.154252 | orchestrator | ++ export EXTERNAL_API=false 2026-02-15 02:43:42.154260 | orchestrator | ++ EXTERNAL_API=false 2026-02-15 02:43:42.154267 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-15 02:43:42.154275 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-15 02:43:42.154283 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-15 02:43:42.154291 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-15 02:43:42.154299 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-15 02:43:42.154306 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-15 02:43:42.154314 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-02-15 02:43:42.204701 | orchestrator | + docker version 2026-02-15 02:43:42.305029 | orchestrator | Client: Docker Engine - Community 2026-02-15 02:43:42.305124 | orchestrator | Version: 27.5.1 2026-02-15 02:43:42.305144 | orchestrator | API version: 1.47 2026-02-15 02:43:42.305206 | orchestrator | Go version: go1.22.11 2026-02-15 02:43:42.305220 | orchestrator | Git commit: 9f9e405 2026-02-15 02:43:42.305234 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-15 02:43:42.305249 | orchestrator | OS/Arch: linux/amd64 2026-02-15 02:43:42.305265 | orchestrator | Context: default 2026-02-15 02:43:42.305278 | orchestrator | 2026-02-15 02:43:42.305292 | orchestrator | Server: Docker Engine - Community 2026-02-15 02:43:42.305303 | orchestrator | Engine: 2026-02-15 02:43:42.305313 | orchestrator | Version: 27.5.1 2026-02-15 02:43:42.305322 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-02-15 02:43:42.305358 | orchestrator | Go version: go1.22.11 2026-02-15 02:43:42.305367 | orchestrator | Git commit: 4c9b3b0 2026-02-15 02:43:42.305375 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-02-15 02:43:42.305383 | orchestrator | OS/Arch: linux/amd64 2026-02-15 02:43:42.305390 | orchestrator | Experimental: false 2026-02-15 02:43:42.305398 | orchestrator | containerd: 2026-02-15 02:43:42.305407 | orchestrator | Version: v2.2.1 2026-02-15 02:43:42.305415 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-02-15 02:43:42.305423 | orchestrator | runc: 2026-02-15 02:43:42.305431 | orchestrator | Version: 1.3.4 2026-02-15 02:43:42.305439 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-02-15 02:43:42.305447 | orchestrator | docker-init: 2026-02-15 02:43:42.305455 | orchestrator | Version: 0.19.0 2026-02-15 02:43:42.305463 | orchestrator | GitCommit: de40ad0 2026-02-15 02:43:42.308743 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-02-15 02:43:42.317504 | orchestrator | + set -e 2026-02-15 02:43:42.317588 | orchestrator | + source /opt/manager-vars.sh 2026-02-15 02:43:42.317607 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-15 02:43:42.317622 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-15 02:43:42.317637 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-15 02:43:42.317652 | orchestrator | ++ CEPH_VERSION=reef 2026-02-15 02:43:42.317666 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-15 
02:43:42.317682 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-15 02:43:42.317697 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-15 02:43:42.317712 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-15 02:43:42.317727 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-15 02:43:42.317742 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-15 02:43:42.317757 | orchestrator | ++ export ARA=false
2026-02-15 02:43:42.317773 | orchestrator | ++ ARA=false
2026-02-15 02:43:42.317788 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-15 02:43:42.317803 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-15 02:43:42.317817 | orchestrator | ++ export TEMPEST=false
2026-02-15 02:43:42.317833 | orchestrator | ++ TEMPEST=false
2026-02-15 02:43:42.317849 | orchestrator | ++ export IS_ZUUL=true
2026-02-15 02:43:42.317864 | orchestrator | ++ IS_ZUUL=true
2026-02-15 02:43:42.317879 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 02:43:42.317896 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 02:43:42.317911 | orchestrator | ++ export EXTERNAL_API=false
2026-02-15 02:43:42.317926 | orchestrator | ++ EXTERNAL_API=false
2026-02-15 02:43:42.317942 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-15 02:43:42.317956 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-15 02:43:42.317972 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-15 02:43:42.317987 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-15 02:43:42.318003 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-15 02:43:42.318077 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-15 02:43:42.318094 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-15 02:43:42.318109 | orchestrator | ++ export INTERACTIVE=false
2026-02-15 02:43:42.318125 | orchestrator | ++ INTERACTIVE=false
2026-02-15 02:43:42.318139 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-15 02:43:42.318197 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-15 02:43:42.318213 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-15 02:43:42.318228 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.5.0
2026-02-15 02:43:42.324636 | orchestrator | + set -e
2026-02-15 02:43:42.324727 | orchestrator | + VERSION=9.5.0
2026-02-15 02:43:42.324745 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.5.0/g' /opt/configuration/environments/manager/configuration.yml
2026-02-15 02:43:42.332674 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-15 02:43:42.332740 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-15 02:43:42.338239 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2026-02-15 02:43:42.342398 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2026-02-15 02:43:42.350289 | orchestrator | /opt/configuration ~
2026-02-15 02:43:42.350357 | orchestrator | + set -e
2026-02-15 02:43:42.350370 | orchestrator | + pushd /opt/configuration
2026-02-15 02:43:42.350381 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-15 02:43:42.352059 | orchestrator | + source /opt/venv/bin/activate
2026-02-15 02:43:42.353039 | orchestrator | ++ deactivate nondestructive
2026-02-15 02:43:42.353086 | orchestrator | ++ '[' -n '' ']'
2026-02-15 02:43:42.353103 | orchestrator | ++ '[' -n '' ']'
2026-02-15 02:43:42.353142 | orchestrator | ++ hash -r
2026-02-15 02:43:42.353190 | orchestrator | ++ '[' -n '' ']'
2026-02-15 02:43:42.353202 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-15 02:43:42.353213 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-15 02:43:42.353224 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-15 02:43:42.353375 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-15 02:43:42.353391 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-15 02:43:42.353402 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-15 02:43:42.353413 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-15 02:43:42.353424 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-15 02:43:42.353436 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-15 02:43:42.353447 | orchestrator | ++ export PATH
2026-02-15 02:43:42.353458 | orchestrator | ++ '[' -n '' ']'
2026-02-15 02:43:42.353469 | orchestrator | ++ '[' -z '' ']'
2026-02-15 02:43:42.353479 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-15 02:43:42.353490 | orchestrator | ++ PS1='(venv) '
2026-02-15 02:43:42.353501 | orchestrator | ++ export PS1
2026-02-15 02:43:42.353511 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-15 02:43:42.353522 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-15 02:43:42.353535 | orchestrator | ++ hash -r
2026-02-15 02:43:42.353555 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2026-02-15 02:43:43.770851 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2026-02-15 02:43:43.773024 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5)
2026-02-15 02:43:43.791722 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2026-02-15 02:43:43.791815 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3)
2026-02-15 02:43:43.791828 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0)
2026-02-15 02:43:43.792219 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1)
2026-02-15 02:43:43.794096 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2026-02-15 02:43:43.795303 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20)
2026-02-15 02:43:43.796939 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2026-02-15 02:43:43.831094 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4)
2026-02-15 02:43:43.832795 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11)
2026-02-15 02:43:43.834933 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3)
2026-02-15 02:43:43.836541 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4)
2026-02-15 02:43:43.840641 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3)
2026-02-15 02:43:44.062349 | orchestrator | ++ which gilt
2026-02-15 02:43:44.066129 | orchestrator | + GILT=/opt/venv/bin/gilt
2026-02-15 02:43:44.066235 | orchestrator | + /opt/venv/bin/gilt overlay
2026-02-15 02:43:44.370777 | orchestrator | osism.cfg-generics:
2026-02-15 02:43:44.536020 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2026-02-15 02:43:44.536123 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2026-02-15 02:43:44.536646 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2026-02-15 02:43:44.536705 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2026-02-15 02:43:45.373088 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2026-02-15 02:43:45.386361 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2026-02-15 02:43:45.717904 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2026-02-15 02:43:45.780416 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-15 02:43:45.780524 | orchestrator | + deactivate
2026-02-15 02:43:45.780540 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-02-15 02:43:45.780554 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-15 02:43:45.780565 | orchestrator | + export PATH
2026-02-15 02:43:45.780576 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-02-15 02:43:45.780588 | orchestrator | + '[' -n '' ']'
2026-02-15 02:43:45.780601 | orchestrator | + hash -r
2026-02-15 02:43:45.780612 | orchestrator | + '[' -n '' ']'
2026-02-15 02:43:45.780622 | orchestrator | + unset VIRTUAL_ENV
2026-02-15 02:43:45.780633 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-02-15 02:43:45.780644 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-02-15 02:43:45.780654 | orchestrator | + unset -f deactivate
2026-02-15 02:43:45.780665 | orchestrator | + popd
2026-02-15 02:43:45.780676 | orchestrator | ~
2026-02-15 02:43:45.782837 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]]
2026-02-15 02:43:45.782895 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-02-15 02:43:45.783857 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-15 02:43:45.848254 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-15 02:43:45.848351 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-02-15 02:43:45.849394 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-15 02:43:45.912084 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-15 02:43:45.913292 | orchestrator | ++ semver 2024.2 2025.1
2026-02-15 02:43:45.981344 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-15 02:43:45.981451 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-02-15 02:43:46.090635 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-02-15 02:43:46.090741 | orchestrator | + source /opt/venv/bin/activate
2026-02-15 02:43:46.090757 | orchestrator | ++ deactivate nondestructive
2026-02-15 02:43:46.090769 | orchestrator | ++ '[' -n '' ']'
2026-02-15 02:43:46.090780 | orchestrator | ++ '[' -n '' ']'
2026-02-15 02:43:46.090791 | orchestrator | ++ hash -r
2026-02-15 02:43:46.090815 | orchestrator | ++ '[' -n '' ']'
2026-02-15 02:43:46.090826 | orchestrator | ++ unset VIRTUAL_ENV
2026-02-15 02:43:46.090837 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-02-15 02:43:46.090848 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-02-15 02:43:46.091193 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-02-15 02:43:46.091283 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-02-15 02:43:46.091299 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-02-15 02:43:46.091312 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-02-15 02:43:46.091322 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-15 02:43:46.091352 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-02-15 02:43:46.091366 | orchestrator | ++ export PATH
2026-02-15 02:43:46.091377 | orchestrator | ++ '[' -n '' ']'
2026-02-15 02:43:46.091434 | orchestrator | ++ '[' -z '' ']'
2026-02-15 02:43:46.091447 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-02-15 02:43:46.091456 | orchestrator | ++ PS1='(venv) '
2026-02-15 02:43:46.091462 | orchestrator | ++ export PS1
2026-02-15 02:43:46.091469 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-02-15 02:43:46.091475 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-02-15 02:43:46.091484 | orchestrator | ++ hash -r
2026-02-15 02:43:46.091661 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-02-15 02:43:47.476554 | orchestrator | 
2026-02-15 02:43:47.476666 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-02-15 02:43:47.476684 | orchestrator | 
2026-02-15 02:43:47.476697 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-02-15 02:43:48.171688 | orchestrator | ok: [testbed-manager]
2026-02-15 02:43:48.171788 | orchestrator | 
2026-02-15 02:43:48.171798 | orchestrator | TASK [Copy fact files] *********************************************************
2026-02-15 02:43:49.248531 | orchestrator | changed: [testbed-manager]
2026-02-15 02:43:49.248660 | orchestrator | 
2026-02-15 02:43:49.248716 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-02-15 02:43:49.248747 | orchestrator | 
2026-02-15 02:43:49.248755 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-15 02:43:51.718252 | orchestrator | ok: [testbed-manager]
2026-02-15 02:43:51.718359 | orchestrator | 
2026-02-15 02:43:51.718375 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-02-15 02:43:51.766730 | orchestrator | ok: [testbed-manager]
2026-02-15 02:43:51.766802 | orchestrator | 
2026-02-15 02:43:51.766808 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-02-15 02:43:52.397370 | orchestrator | changed: [testbed-manager]
2026-02-15 02:43:52.397446 | orchestrator | 
2026-02-15 02:43:52.397457 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-02-15 02:43:52.445410 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:43:52.445519 | orchestrator | 
2026-02-15 02:43:52.445537 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-02-15 02:43:52.817729 | orchestrator | changed: [testbed-manager]
2026-02-15 02:43:52.817856 | orchestrator | 
2026-02-15 02:43:52.817873 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-02-15 02:43:53.169294 | orchestrator | ok: [testbed-manager]
2026-02-15 02:43:53.169438 | orchestrator | 
2026-02-15 02:43:53.169467 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-02-15 02:43:53.300048 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:43:53.300139 | orchestrator | 
2026-02-15 02:43:53.300150 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-02-15 02:43:53.300159 | orchestrator | 
2026-02-15 02:43:53.300167 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-15 02:43:55.161087 | orchestrator | ok: [testbed-manager]
2026-02-15 02:43:55.161167 | orchestrator | 
2026-02-15 02:43:55.161176 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-02-15 02:43:55.274459 | orchestrator | included: osism.services.traefik for testbed-manager
2026-02-15 02:43:55.274555 | orchestrator | 
2026-02-15 02:43:55.274568 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-02-15 02:43:55.355325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-02-15 02:43:55.355427 | orchestrator | 
2026-02-15 02:43:55.355443 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-02-15 02:43:56.557359 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-02-15 02:43:56.557440 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-02-15 02:43:56.557450 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-02-15 02:43:56.557458 | orchestrator | 
2026-02-15 02:43:56.557468 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-02-15 02:43:58.540744 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-02-15 02:43:58.540850 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-02-15 02:43:58.540861 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-02-15 02:43:58.540869 | orchestrator | 
2026-02-15 02:43:58.540877 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-02-15 02:43:59.237324 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-15 02:43:59.237428 | orchestrator | changed: [testbed-manager]
2026-02-15 02:43:59.237444 | orchestrator | 
2026-02-15 02:43:59.237457 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-02-15 02:43:59.945871 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-15 02:43:59.945996 | orchestrator | changed: [testbed-manager]
2026-02-15 02:43:59.946097 | orchestrator | 
2026-02-15 02:43:59.946120 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-02-15 02:44:00.009105 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:44:00.009226 | orchestrator | 
2026-02-15 02:44:00.009313 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-02-15 02:44:00.407611 | orchestrator | ok: [testbed-manager]
2026-02-15 02:44:00.407680 | orchestrator | 
2026-02-15 02:44:00.407687 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-02-15 02:44:00.482187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-02-15 02:44:00.482335 | orchestrator | 
2026-02-15 02:44:00.482361 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-02-15 02:44:01.699800 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:01.699893 | orchestrator | 
2026-02-15 02:44:01.699906 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-02-15 02:44:02.599587 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:02.599674 | orchestrator | 
2026-02-15 02:44:02.599688 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-02-15 02:44:17.226918 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:17.227043 | orchestrator | 
2026-02-15 02:44:17.227061 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-02-15 02:44:17.291119 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:44:17.291188 | orchestrator | 
2026-02-15 02:44:17.291210 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-02-15 02:44:17.291215 | orchestrator | 
2026-02-15 02:44:17.291219 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-15 02:44:19.221496 | orchestrator | ok: [testbed-manager]
2026-02-15 02:44:19.221603 | orchestrator | 
2026-02-15 02:44:19.221621 | orchestrator | TASK [Apply manager role] ******************************************************
2026-02-15 02:44:19.357664 | orchestrator | included: osism.services.manager for testbed-manager
2026-02-15 02:44:19.357765 | orchestrator | 
2026-02-15 02:44:19.357785 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-02-15 02:44:19.427693 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-02-15 02:44:19.427795 | orchestrator | 
2026-02-15 02:44:19.427811 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-02-15 02:44:22.198113 | orchestrator | ok: [testbed-manager]
2026-02-15 02:44:22.198218 | orchestrator | 
2026-02-15 02:44:22.198233 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-02-15 02:44:22.251469 | orchestrator | ok: [testbed-manager]
2026-02-15 02:44:22.251567 | orchestrator | 
2026-02-15 02:44:22.251582 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-02-15 02:44:22.416987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-02-15 02:44:22.417087 | orchestrator | 
2026-02-15 02:44:22.417103 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-02-15 02:44:25.511512 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-02-15 02:44:25.511586 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-02-15 02:44:25.511591 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-02-15 02:44:25.511596 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-02-15 02:44:25.511600 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-02-15 02:44:25.511605 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-02-15 02:44:25.511609 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-02-15 02:44:25.511613 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-02-15 02:44:25.511617 | orchestrator | 
2026-02-15 02:44:25.511622 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-02-15 02:44:26.185202 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:26.185332 | orchestrator | 
2026-02-15 02:44:26.185359 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-02-15 02:44:26.883648 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:26.883743 | orchestrator | 
2026-02-15 02:44:26.883759 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-02-15 02:44:26.976273 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-02-15 02:44:26.976362 | orchestrator | 
2026-02-15 02:44:26.976375 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-02-15 02:44:28.319134 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-02-15 02:44:28.319260 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-02-15 02:44:28.319285 | orchestrator | 
2026-02-15 02:44:28.319306 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-02-15 02:44:29.001895 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:29.002003 | orchestrator | 
2026-02-15 02:44:29.002085 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-02-15 02:44:29.064857 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:44:29.064977 | orchestrator | 
2026-02-15 02:44:29.064996 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-02-15 02:44:29.156908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-02-15 02:44:29.156989 | orchestrator | 
2026-02-15 02:44:29.156999 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-02-15 02:44:29.857064 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:29.857171 | orchestrator | 
2026-02-15 02:44:29.857186 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-02-15 02:44:29.943624 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-02-15 02:44:29.943728 | orchestrator | 
2026-02-15 02:44:29.943753 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-02-15 02:44:31.380856 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-15 02:44:31.380959 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-15 02:44:31.380974 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:31.380987 | orchestrator | 
2026-02-15 02:44:31.380999 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-02-15 02:44:32.087216 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:32.087342 | orchestrator | 
2026-02-15 02:44:32.087367 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-02-15 02:44:32.151629 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:44:32.151731 | orchestrator | 
2026-02-15 02:44:32.151748 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-02-15 02:44:32.268544 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-02-15 02:44:32.268653 | orchestrator | 
2026-02-15 02:44:32.268668 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-02-15 02:44:32.969606 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:32.969731 | orchestrator | 
2026-02-15 02:44:32.969759 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-02-15 02:44:33.401791 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:33.401921 | orchestrator | 
2026-02-15 02:44:33.401937 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-02-15 02:44:34.708954 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-02-15 02:44:34.709085 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-02-15 02:44:34.709096 | orchestrator | 
2026-02-15 02:44:34.709106 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-02-15 02:44:35.415539 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:35.415686 | orchestrator | 
2026-02-15 02:44:35.415704 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-02-15 02:44:35.817286 | orchestrator | ok: [testbed-manager]
2026-02-15 02:44:35.817411 | orchestrator | 
2026-02-15 02:44:35.817429 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-02-15 02:44:36.206116 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:36.206209 | orchestrator | 
2026-02-15 02:44:36.206222 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-02-15 02:44:36.264715 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:44:36.264813 | orchestrator | 
2026-02-15 02:44:36.264829 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-02-15 02:44:36.368985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-02-15 02:44:36.369128 | orchestrator | 
2026-02-15 02:44:36.369145 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-02-15 02:44:36.419898 | orchestrator | ok: [testbed-manager]
2026-02-15 02:44:36.419992 | orchestrator | 
2026-02-15 02:44:36.420006 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-02-15 02:44:38.654953 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-02-15 02:44:38.655037 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-02-15 02:44:38.655050 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-02-15 02:44:38.655059 | orchestrator | 
2026-02-15 02:44:38.655069 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-02-15 02:44:39.406788 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:39.406874 | orchestrator | 
2026-02-15 02:44:39.406886 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-02-15 02:44:40.196817 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:40.196910 | orchestrator | 
2026-02-15 02:44:40.196919 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-02-15 02:44:41.016579 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:41.016652 | orchestrator | 
2026-02-15 02:44:41.016661 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-02-15 02:44:41.085094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-02-15 02:44:41.085182 | orchestrator | 
2026-02-15 02:44:41.085195 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-02-15 02:44:41.126390 | orchestrator | ok: [testbed-manager]
2026-02-15 02:44:41.126570 | orchestrator | 
2026-02-15 02:44:41.126587 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-02-15 02:44:41.855233 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-02-15 02:44:41.855347 | orchestrator | 
2026-02-15 02:44:41.855363 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-02-15 02:44:41.972787 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-02-15 02:44:41.972908 | orchestrator | 
2026-02-15 02:44:41.972924 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-02-15 02:44:42.702904 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:42.702994 | orchestrator | 
2026-02-15 02:44:42.703006 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-02-15 02:44:43.361419 | orchestrator | ok: [testbed-manager]
2026-02-15 02:44:43.361508 | orchestrator | 
2026-02-15 02:44:43.361520 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-02-15 02:44:43.414442 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:44:43.414523 | orchestrator | 
2026-02-15 02:44:43.414534 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-02-15 02:44:43.489088 | orchestrator | ok: [testbed-manager]
2026-02-15 02:44:43.489204 | orchestrator | 
2026-02-15 02:44:43.489229 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-02-15 02:44:44.395064 | orchestrator | changed: [testbed-manager]
2026-02-15 02:44:44.395193 | orchestrator | 
2026-02-15 02:44:44.395220 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-02-15 02:46:05.843929 | orchestrator | changed: [testbed-manager]
2026-02-15 02:46:05.844077 | orchestrator | 
2026-02-15 02:46:05.844096 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-02-15 02:46:06.915027 | orchestrator | ok: [testbed-manager]
2026-02-15 02:46:06.915202 | orchestrator | 
2026-02-15 02:46:06.915220 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-02-15 02:46:06.977006 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:46:06.977137 | orchestrator | 
2026-02-15 02:46:06.977153 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-02-15 02:46:09.500628 | orchestrator | changed: [testbed-manager]
2026-02-15 02:46:09.500727 | orchestrator | 
2026-02-15 02:46:09.500744 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-02-15 02:46:09.554199 | orchestrator | ok: [testbed-manager]
2026-02-15 02:46:09.554305 | orchestrator | 
2026-02-15 02:46:09.554319 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-15 02:46:09.554329 | orchestrator | 
2026-02-15 02:46:09.554339 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-02-15 02:46:09.723015 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:46:09.723120 | orchestrator | 
2026-02-15 02:46:09.723132 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-02-15 02:47:09.792959 | orchestrator | Pausing for 60 seconds
2026-02-15 02:47:09.793106 | orchestrator | changed: [testbed-manager]
2026-02-15 02:47:09.793129 | orchestrator | 
2026-02-15 02:47:09.793149 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-02-15 02:47:12.931306 | orchestrator | changed: [testbed-manager]
2026-02-15 02:47:12.931494 | orchestrator | 
2026-02-15 02:47:12.931523 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-02-15 02:48:15.257392 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-02-15 02:48:15.257518 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-02-15 02:48:15.257560 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left).
2026-02-15 02:48:15.257572 | orchestrator | changed: [testbed-manager]
2026-02-15 02:48:15.257582 | orchestrator | 
2026-02-15 02:48:15.257591 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-02-15 02:48:27.417930 | orchestrator | changed: [testbed-manager]
2026-02-15 02:48:27.418100 | orchestrator | 
2026-02-15 02:48:27.418119 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-02-15 02:48:27.512648 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-02-15 02:48:27.512768 | orchestrator | 
2026-02-15 02:48:27.512793 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-02-15 02:48:27.512872 | orchestrator | 
2026-02-15 02:48:27.512893 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-02-15 02:48:27.560704 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:48:27.560865 | orchestrator | 
2026-02-15 02:48:27.560892 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-02-15 02:48:27.645790 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-02-15 02:48:27.645890 | orchestrator | 
2026-02-15 02:48:27.645900 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-02-15 02:48:28.544734 | orchestrator | changed: [testbed-manager]
2026-02-15 02:48:28.544833 | orchestrator | 
2026-02-15 02:48:28.544841 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-02-15 02:48:32.017606 | orchestrator | ok: [testbed-manager]
2026-02-15 02:48:32.017710 | orchestrator | 
2026-02-15 02:48:32.017725 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-02-15 02:48:32.101244 | orchestrator | ok: [testbed-manager] => {
2026-02-15 02:48:32.101328 | orchestrator | "version_check_result.stdout_lines": [
2026-02-15 02:48:32.101340 | orchestrator | "=== OSISM Container Version Check ===",
2026-02-15 02:48:32.101350 | orchestrator | "Checking running containers against expected versions...",
2026-02-15 02:48:32.101359 | orchestrator | "",
2026-02-15 02:48:32.101368 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-02-15 02:48:32.101376 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-15 02:48:32.101385 | orchestrator | " Enabled: true",
2026-02-15 02:48:32.101393 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251130.0",
2026-02-15 02:48:32.101401 | orchestrator | " Status: ✅ MATCH",
2026-02-15 02:48:32.101409 | orchestrator | "",
2026-02-15 02:48:32.101417 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)",
2026-02-15 02:48:32.101442 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-15 02:48:32.101450 | orchestrator | " Enabled: true",
2026-02-15 02:48:32.101459 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251130.0",
2026-02-15 02:48:32.101467 | orchestrator | " Status: ✅ MATCH",
2026-02-15 02:48:32.101475 | orchestrator | "",
2026-02-15 02:48:32.101483 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-02-15 02:48:32.101491 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-15 02:48:32.101499 | orchestrator | " Enabled: true",
2026-02-15 02:48:32.101507 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:0.20251130.0",
2026-02-15 02:48:32.101515 | orchestrator | " Status: ✅ MATCH",
2026-02-15 02:48:32.101523 | orchestrator | "",
2026-02-15 02:48:32.101531 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-02-15 02:48:32.101539 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-15 02:48:32.101547 | orchestrator | " Enabled: true",
2026-02-15 02:48:32.101554 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251130.0",
2026-02-15 02:48:32.101562 | orchestrator | " Status: ✅ MATCH",
2026-02-15 02:48:32.101570 | orchestrator | "",
2026-02-15 02:48:32.101580 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-02-15 02:48:32.101588 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-15 02:48:32.101596 | orchestrator | " Enabled: true",
2026-02-15 02:48:32.101604 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251130.0",
2026-02-15 02:48:32.101612 | orchestrator | " Status: ✅ MATCH",
2026-02-15 02:48:32.101620 | orchestrator | "",
2026-02-15 02:48:32.101632 | orchestrator | "Checking service: osismclient (OSISM Client)",
2026-02-15 02:48:32.101645 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-15 02:48:32.101658 | orchestrator | " Enabled: true",
2026-02-15 02:48:32.101679 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1",
2026-02-15 02:48:32.101692 | orchestrator | " Status: ✅ MATCH",
2026-02-15 02:48:32.101704 | orchestrator | "",
2026-02-15 02:48:32.101715 | orchestrator | "Checking service: ara-server (ARA Server)",
2026-02-15 02:48:32.101727 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-15 02:48:32.101740 | orchestrator | " Enabled: true",
2026-02-15 02:48:32.101752 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-02-15 02:48:32.101764 | orchestrator | " Status: ✅ MATCH",
2026-02-15 02:48:32.101777 | orchestrator | "",
2026-02-15 02:48:32.101789 | orchestrator | "Checking service: 
mariadb (MariaDB for ARA)", 2026-02-15 02:48:32.101803 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-15 02:48:32.101816 | orchestrator | " Enabled: true", 2026-02-15 02:48:32.101853 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-15 02:48:32.101866 | orchestrator | " Status: ✅ MATCH", 2026-02-15 02:48:32.101880 | orchestrator | "", 2026-02-15 02:48:32.101895 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-15 02:48:32.101905 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-15 02:48:32.101914 | orchestrator | " Enabled: true", 2026-02-15 02:48:32.101923 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251130.1", 2026-02-15 02:48:32.101931 | orchestrator | " Status: ✅ MATCH", 2026-02-15 02:48:32.101940 | orchestrator | "", 2026-02-15 02:48:32.101950 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-15 02:48:32.101958 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-15 02:48:32.101967 | orchestrator | " Enabled: true", 2026-02-15 02:48:32.101977 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-15 02:48:32.101986 | orchestrator | " Status: ✅ MATCH", 2026-02-15 02:48:32.101994 | orchestrator | "", 2026-02-15 02:48:32.102003 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-15 02:48:32.102088 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-15 02:48:32.102099 | orchestrator | " Enabled: true", 2026-02-15 02:48:32.102108 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-15 02:48:32.102117 | orchestrator | " Status: ✅ MATCH", 2026-02-15 02:48:32.102125 | orchestrator | "", 2026-02-15 02:48:32.102135 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-15 02:48:32.102144 | 
orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-15 02:48:32.102152 | orchestrator | " Enabled: true", 2026-02-15 02:48:32.102162 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-15 02:48:32.102171 | orchestrator | " Status: ✅ MATCH", 2026-02-15 02:48:32.102180 | orchestrator | "", 2026-02-15 02:48:32.102190 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-15 02:48:32.102199 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-15 02:48:32.102208 | orchestrator | " Enabled: true", 2026-02-15 02:48:32.102217 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-15 02:48:32.102225 | orchestrator | " Status: ✅ MATCH", 2026-02-15 02:48:32.102233 | orchestrator | "", 2026-02-15 02:48:32.102241 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-15 02:48:32.102249 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-15 02:48:32.102256 | orchestrator | " Enabled: true", 2026-02-15 02:48:32.102264 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-15 02:48:32.102287 | orchestrator | " Status: ✅ MATCH", 2026-02-15 02:48:32.102295 | orchestrator | "", 2026-02-15 02:48:32.102303 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-15 02:48:32.102311 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-15 02:48:32.102326 | orchestrator | " Enabled: true", 2026-02-15 02:48:32.102334 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251130.1", 2026-02-15 02:48:32.102341 | orchestrator | " Status: ✅ MATCH", 2026-02-15 02:48:32.102349 | orchestrator | "", 2026-02-15 02:48:32.102357 | orchestrator | "=== Summary ===", 2026-02-15 02:48:32.102365 | orchestrator | "Errors (version mismatches): 0", 2026-02-15 02:48:32.102373 | orchestrator | "Warnings (expected containers not 
running): 0", 2026-02-15 02:48:32.102381 | orchestrator | "", 2026-02-15 02:48:32.102389 | orchestrator | "✅ All running containers match expected versions!" 2026-02-15 02:48:32.102397 | orchestrator | ] 2026-02-15 02:48:32.102405 | orchestrator | } 2026-02-15 02:48:32.102414 | orchestrator | 2026-02-15 02:48:32.102422 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-15 02:48:32.169098 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:48:32.169215 | orchestrator | 2026-02-15 02:48:32.169237 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 02:48:32.169256 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-02-15 02:48:32.169272 | orchestrator | 2026-02-15 02:48:32.296256 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-15 02:48:32.296354 | orchestrator | + deactivate 2026-02-15 02:48:32.296371 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-15 02:48:32.296386 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-15 02:48:32.296397 | orchestrator | + export PATH 2026-02-15 02:48:32.296408 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-15 02:48:32.296419 | orchestrator | + '[' -n '' ']' 2026-02-15 02:48:32.296430 | orchestrator | + hash -r 2026-02-15 02:48:32.296441 | orchestrator | + '[' -n '' ']' 2026-02-15 02:48:32.296451 | orchestrator | + unset VIRTUAL_ENV 2026-02-15 02:48:32.296462 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-15 02:48:32.296473 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-15 02:48:32.296483 | orchestrator | + unset -f deactivate 2026-02-15 02:48:32.296495 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-02-15 02:48:32.302280 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-15 02:48:32.302397 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-15 02:48:32.302438 | orchestrator | + local max_attempts=60 2026-02-15 02:48:32.302450 | orchestrator | + local name=ceph-ansible 2026-02-15 02:48:32.302461 | orchestrator | + local attempt_num=1 2026-02-15 02:48:32.302538 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 02:48:32.332483 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-15 02:48:32.332598 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-15 02:48:32.332676 | orchestrator | + local max_attempts=60 2026-02-15 02:48:32.332699 | orchestrator | + local name=kolla-ansible 2026-02-15 02:48:32.332717 | orchestrator | + local attempt_num=1 2026-02-15 02:48:32.332922 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-15 02:48:32.374601 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-15 02:48:32.374695 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-15 02:48:32.374708 | orchestrator | + local max_attempts=60 2026-02-15 02:48:32.374720 | orchestrator | + local name=osism-ansible 2026-02-15 02:48:32.374731 | orchestrator | + local attempt_num=1 2026-02-15 02:48:32.375918 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-15 02:48:32.419957 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-15 02:48:32.420063 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-15 02:48:32.420086 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-15 02:48:33.141151 | orchestrator | + docker compose 
--project-directory /opt/manager ps 2026-02-15 02:48:33.342951 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-15 02:48:33.343020 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-02-15 02:48:33.343027 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-02-15 02:48:33.343031 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-15 02:48:33.343037 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-02-15 02:48:33.343055 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-02-15 02:48:33.343059 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-02-15 02:48:33.343063 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-02-15 02:48:33.343067 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-02-15 02:48:33.343071 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-02-15 02:48:33.343075 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" openstack 2 minutes ago Up 2 minutes (healthy) 
2026-02-15 02:48:33.343079 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-02-15 02:48:33.343082 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-02-15 02:48:33.343099 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-02-15 02:48:33.343103 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-02-15 02:48:33.343108 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-02-15 02:48:33.348399 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-15 02:48:33.395384 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-15 02:48:33.395494 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-02-15 02:48:33.400711 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-02-15 02:48:45.825344 | orchestrator | 2026-02-15 02:48:45 | INFO  | Task 7867384f-563c-43e3-84f0-711a23cbd0d5 (resolvconf) was prepared for execution. 2026-02-15 02:48:45.825462 | orchestrator | 2026-02-15 02:48:45 | INFO  | It takes a moment until task 7867384f-563c-43e3-84f0-711a23cbd0d5 (resolvconf) has been started and output is visible here. 
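The `wait_for_container_healthy` calls traced above poll `docker inspect -f '{{.State.Health.Status}}'` until a container reports `healthy`, with an attempt budget (here 60). A minimal sketch of such a helper, under stated assumptions: `INSPECT_CMD` is a hypothetical override hook added so the loop can be exercised without a Docker daemon, and the retry interval and failure message are guesses, since the trace only shows the already-healthy fast path.

```shell
#!/bin/sh
# Sketch of a poll-until-healthy helper, modeled on the xtrace above.
# INSPECT_CMD is a hypothetical hook (not in the original script) so the
# loop is testable without Docker; by default it shells out to
# `docker inspect` exactly as the trace does.
INSPECT_CMD=${INSPECT_CMD:-"docker inspect -f {{.State.Health.Status}}"}

wait_for_container_healthy() {
    max_attempts=$1
    name=$2
    attempt_num=1
    # Loop until the container's health status reads "healthy".
    until [ "$($INSPECT_CMD "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5   # interval is an assumption; not visible in the trace
    done
    return 0
}
```

In the trace all three containers (ceph-ansible, kolla-ansible, osism-ansible) are already healthy on the first inspect, so the loop body never runs.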
2026-02-15 02:49:01.045158 | orchestrator | 2026-02-15 02:49:01.045301 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-02-15 02:49:01.045331 | orchestrator | 2026-02-15 02:49:01.045352 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-15 02:49:01.045372 | orchestrator | Sunday 15 February 2026 02:48:50 +0000 (0:00:00.154) 0:00:00.154 ******* 2026-02-15 02:49:01.045385 | orchestrator | ok: [testbed-manager] 2026-02-15 02:49:01.045397 | orchestrator | 2026-02-15 02:49:01.045408 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-15 02:49:01.045420 | orchestrator | Sunday 15 February 2026 02:48:54 +0000 (0:00:04.075) 0:00:04.230 ******* 2026-02-15 02:49:01.045431 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:49:01.045443 | orchestrator | 2026-02-15 02:49:01.045455 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-15 02:49:01.045466 | orchestrator | Sunday 15 February 2026 02:48:54 +0000 (0:00:00.082) 0:00:04.312 ******* 2026-02-15 02:49:01.045478 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-02-15 02:49:01.045499 | orchestrator | 2026-02-15 02:49:01.045517 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-15 02:49:01.045536 | orchestrator | Sunday 15 February 2026 02:48:54 +0000 (0:00:00.085) 0:00:04.397 ******* 2026-02-15 02:49:01.045577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-02-15 02:49:01.045598 | orchestrator | 2026-02-15 02:49:01.045616 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-02-15 02:49:01.045635 | orchestrator | Sunday 15 February 2026 02:48:54 +0000 (0:00:00.089) 0:00:04.487 ******* 2026-02-15 02:49:01.045653 | orchestrator | ok: [testbed-manager] 2026-02-15 02:49:01.045669 | orchestrator | 2026-02-15 02:49:01.045688 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-15 02:49:01.045705 | orchestrator | Sunday 15 February 2026 02:48:55 +0000 (0:00:01.263) 0:00:05.750 ******* 2026-02-15 02:49:01.045723 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:49:01.045742 | orchestrator | 2026-02-15 02:49:01.045762 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-15 02:49:01.045781 | orchestrator | Sunday 15 February 2026 02:48:55 +0000 (0:00:00.073) 0:00:05.824 ******* 2026-02-15 02:49:01.045832 | orchestrator | ok: [testbed-manager] 2026-02-15 02:49:01.045852 | orchestrator | 2026-02-15 02:49:01.045871 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-15 02:49:01.045890 | orchestrator | Sunday 15 February 2026 02:48:56 +0000 (0:00:00.559) 0:00:06.383 ******* 2026-02-15 02:49:01.045908 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:49:01.045928 | orchestrator | 2026-02-15 02:49:01.045948 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-15 02:49:01.046002 | orchestrator | Sunday 15 February 2026 02:48:56 +0000 (0:00:00.085) 0:00:06.469 ******* 2026-02-15 02:49:01.046084 | orchestrator | changed: [testbed-manager] 2026-02-15 02:49:01.046098 | orchestrator | 2026-02-15 02:49:01.046110 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-15 02:49:01.046121 | orchestrator | Sunday 15 February 2026 02:48:57 +0000 (0:00:00.593) 0:00:07.062 ******* 2026-02-15 02:49:01.046132 | orchestrator | changed: 
[testbed-manager] 2026-02-15 02:49:01.046142 | orchestrator | 2026-02-15 02:49:01.046168 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-15 02:49:01.046191 | orchestrator | Sunday 15 February 2026 02:48:58 +0000 (0:00:01.178) 0:00:08.241 ******* 2026-02-15 02:49:01.046203 | orchestrator | ok: [testbed-manager] 2026-02-15 02:49:01.046214 | orchestrator | 2026-02-15 02:49:01.046225 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-15 02:49:01.046236 | orchestrator | Sunday 15 February 2026 02:48:59 +0000 (0:00:01.071) 0:00:09.313 ******* 2026-02-15 02:49:01.046247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-02-15 02:49:01.046258 | orchestrator | 2026-02-15 02:49:01.046269 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-15 02:49:01.046280 | orchestrator | Sunday 15 February 2026 02:48:59 +0000 (0:00:00.091) 0:00:09.405 ******* 2026-02-15 02:49:01.046291 | orchestrator | changed: [testbed-manager] 2026-02-15 02:49:01.046301 | orchestrator | 2026-02-15 02:49:01.046312 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 02:49:01.046324 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-15 02:49:01.046335 | orchestrator | 2026-02-15 02:49:01.046346 | orchestrator | 2026-02-15 02:49:01.046357 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 02:49:01.046368 | orchestrator | Sunday 15 February 2026 02:49:00 +0000 (0:00:01.253) 0:00:10.658 ******* 2026-02-15 02:49:01.046379 | orchestrator | =============================================================================== 2026-02-15 02:49:01.046389 | 
orchestrator | Gathering Facts --------------------------------------------------------- 4.08s 2026-02-15 02:49:01.046400 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.26s 2026-02-15 02:49:01.046411 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.25s 2026-02-15 02:49:01.046422 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.18s 2026-02-15 02:49:01.046432 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.07s 2026-02-15 02:49:01.046443 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.59s 2026-02-15 02:49:01.046477 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.56s 2026-02-15 02:49:01.046489 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-02-15 02:49:01.046500 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-02-15 02:49:01.046511 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2026-02-15 02:49:01.046522 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2026-02-15 02:49:01.046533 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s 2026-02-15 02:49:01.046556 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2026-02-15 02:49:01.397712 | orchestrator | + osism apply sshconfig 2026-02-15 02:49:13.682478 | orchestrator | 2026-02-15 02:49:13 | INFO  | Task d14e7e6d-47d8-484e-ace4-ce3e0394fdf5 (sshconfig) was prepared for execution. 
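The `++ semver 9.5.0 7.0.0` step earlier in the trace compares two versions and yields `1`, which the following `[[ 1 -ge 0 ]]` treats as "first version is at least the second". A rough stand-in for that comparison using `sort -V`; the name `semver_cmp` and the 1/0/-1 output convention are assumptions, not the testbed's actual `semver` helper.

```shell
#!/bin/sh
# Rough stand-in for a semver-style comparison: prints 1, 0, or -1 as
# the first version is newer than, equal to, or older than the second.
# Relies on GNU coreutils `sort -V` for version-aware ordering.
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
        return
    fi
    # The version that sorts first is the lower one.
    lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)
    if [ "$lower" = "$2" ]; then
        echo 1
    else
        echo -1
    fi
}
```

With this convention, `semver_cmp 9.5.0 7.0.0` prints `1`, matching the trace's result and satisfying the `-ge 0` gate.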
2026-02-15 02:49:13.682585 | orchestrator | 2026-02-15 02:49:13 | INFO  | It takes a moment until task d14e7e6d-47d8-484e-ace4-ce3e0394fdf5 (sshconfig) has been started and output is visible here. 2026-02-15 02:49:26.615573 | orchestrator | 2026-02-15 02:49:26.615691 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-02-15 02:49:26.615711 | orchestrator | 2026-02-15 02:49:26.615745 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-02-15 02:49:26.615759 | orchestrator | Sunday 15 February 2026 02:49:18 +0000 (0:00:00.184) 0:00:00.184 ******* 2026-02-15 02:49:26.615772 | orchestrator | ok: [testbed-manager] 2026-02-15 02:49:26.615786 | orchestrator | 2026-02-15 02:49:26.615801 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-02-15 02:49:26.615814 | orchestrator | Sunday 15 February 2026 02:49:18 +0000 (0:00:00.615) 0:00:00.800 ******* 2026-02-15 02:49:26.615828 | orchestrator | changed: [testbed-manager] 2026-02-15 02:49:26.615843 | orchestrator | 2026-02-15 02:49:26.615855 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-02-15 02:49:26.615868 | orchestrator | Sunday 15 February 2026 02:49:19 +0000 (0:00:00.546) 0:00:01.346 ******* 2026-02-15 02:49:26.615881 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2026-02-15 02:49:26.615894 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-02-15 02:49:26.615908 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-02-15 02:49:26.615921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-02-15 02:49:26.615933 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-02-15 02:49:26.615947 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-02-15 02:49:26.615960 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2026-02-15 02:49:26.615974 | orchestrator | 2026-02-15 02:49:26.615988 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-02-15 02:49:26.616001 | orchestrator | Sunday 15 February 2026 02:49:25 +0000 (0:00:06.151) 0:00:07.498 ******* 2026-02-15 02:49:26.616014 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:49:26.616026 | orchestrator | 2026-02-15 02:49:26.616038 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-02-15 02:49:26.616052 | orchestrator | Sunday 15 February 2026 02:49:25 +0000 (0:00:00.087) 0:00:07.585 ******* 2026-02-15 02:49:26.616065 | orchestrator | changed: [testbed-manager] 2026-02-15 02:49:26.616078 | orchestrator | 2026-02-15 02:49:26.616145 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 02:49:26.616161 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 02:49:26.616176 | orchestrator | 2026-02-15 02:49:26.616190 | orchestrator | 2026-02-15 02:49:26.616295 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 02:49:26.616311 | orchestrator | Sunday 15 February 2026 02:49:26 +0000 (0:00:00.627) 0:00:08.213 ******* 2026-02-15 02:49:26.616326 | orchestrator | =============================================================================== 2026-02-15 02:49:26.616340 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 6.15s 2026-02-15 02:49:26.616354 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.63s 2026-02-15 02:49:26.616368 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.62s 2026-02-15 02:49:26.616380 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.55s 2026-02-15 02:49:26.616420 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.09s 2026-02-15 02:49:26.964811 | orchestrator | + osism apply known-hosts 2026-02-15 02:49:39.267928 | orchestrator | 2026-02-15 02:49:39 | INFO  | Task 0fc88a61-0b47-466e-ad25-4d637789d89b (known-hosts) was prepared for execution. 2026-02-15 02:49:39.268065 | orchestrator | 2026-02-15 02:49:39 | INFO  | It takes a moment until task 0fc88a61-0b47-466e-ad25-4d637789d89b (known-hosts) has been started and output is visible here. 2026-02-15 02:49:57.791932 | orchestrator | 2026-02-15 02:49:57.792027 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-02-15 02:49:57.792038 | orchestrator | 2026-02-15 02:49:57.792047 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-02-15 02:49:57.792055 | orchestrator | Sunday 15 February 2026 02:49:43 +0000 (0:00:00.178) 0:00:00.178 ******* 2026-02-15 02:49:57.792063 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-15 02:49:57.792072 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-15 02:49:57.792079 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-15 02:49:57.792087 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-15 02:49:57.792094 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-15 02:49:57.792102 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-15 02:49:57.792109 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-15 02:49:57.792116 | orchestrator | 2026-02-15 02:49:57.792124 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-02-15 02:49:57.792132 | orchestrator | Sunday 15 February 2026 02:49:50 +0000 (0:00:06.278) 0:00:06.456 ******* 2026-02-15 
02:49:57.792140 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-15 02:49:57.792150 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-15 02:49:57.792157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-15 02:49:57.792165 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-15 02:49:57.792172 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-15 02:49:57.792189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-15 02:49:57.792196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-15 02:49:57.792204 | orchestrator | 2026-02-15 02:49:57.792211 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:49:57.792276 | orchestrator | Sunday 15 February 2026 02:49:50 +0000 (0:00:00.163) 0:00:06.620 ******* 2026-02-15 02:49:57.792295 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIAAGshswXWgsZMWTmoJhw9X/KUE42Lff9llnSsqgvXKf) 2026-02-15 02:49:57.792311 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvHTNDstUJ3OG214ZabceJWRghvoJVmC2yCa/5lfKeE/1wSstXFVKe8Xqt4dqWiZTlg/eRO3YBZMw5ZVtUTF+VwOdg0e5qCf/sG/6wQT3tzc86/c33dTxihFshFrIoqLKkKQkqqSmsUEVAMv4elUvD1O9bugd25naLu2KpMTEqt8kGUaxYurmOqwxzulOunXW50SNJYCuINpmbamhxxQ4PcOI+yIZOYgfS9QYwGqVPckK2HKeFOErQBBpdlseTg5cur4FDOb8HnQyGpx7GARgqqZJLJAULq0HB35crxVnZSxMP0HLRIWbqiZtOzZdTESUHu1oroarwvb2v1gat546JMH8Xzo4M9Hn0jvsunfrmL1RaA3oZdK/EYRznZvntGe9/dOGcbhEUJoLSxAHKBvc2o4REWDYXkHuFt/smz2lLm1ns5Z1hw1vza4DgjKeAJiS2fGKEGSsvnQTJthckxnBfWhgu+GkZm6HmqqSKLUol6JXj9+6JWZdiOtVR+CaWNvs=) 2026-02-15 02:49:57.792340 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMKEAycDJC3BPswcCdew6/IFyK5V+8Ngvw8uGLFZtMUrkOG0Hds5F/USF6TLH0RUievZ8K1+gMmqtBVSwPnShp0=) 2026-02-15 02:49:57.792349 | orchestrator | 2026-02-15 02:49:57.792357 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:49:57.792364 | orchestrator | Sunday 15 February 2026 02:49:51 +0000 (0:00:01.342) 0:00:07.963 ******* 2026-02-15 02:49:57.792371 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPugSSmqTpsChRln24EcgqCqQxkdbTIEUvHBn6bv6lwC) 2026-02-15 02:49:57.792403 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDCm8nqZ6kPzIj95QXeuGdkPHzEQ1esDHK0G80KGxTHkASxwugduWleYSqBzpvraCP/bmEV8LBqHKc0c6l/1KxVg214PgOauaKPRsCCCvZQvsMrscjV1CaX7TyENIoQQeAsonsCgNIVnACCk1V+J+OLBlwzpz3/8xtD0S3K/NgebNFiPJvyPZefqAcG/UmFMvHFBSPM14NFtqC4Rozsaa9o5K/QaHjHUmSlAnoyvAvqDo6phjdy+l33Mq+kp5Pvgg32a/Tz+YmxDjwLnnJDScL6hIn+KOO7dJv45z3b3gYrI/dmqH6TdDUvDdyf0A7aefBlHSFi/vheLYBceeQ0eyBzTE++JHLDAwWeLS5OIOvKyVrYubIUwNQLeub5hHfvIi0aK7KsyB5c+Vo4NEAgUbeJy5YarxN5KmMA+RgPMm4ApgGJg3L2mfa5MNAv7kXvwidzNN+kk16LmqYpSruv6i5qdRv4aLepHG4cpUIOVEd+DomwRpv/uhlYE9RqeWoRkoE=) 2026-02-15 02:49:57.792412 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnczmzpSls0/ejUn3J1ic44nsTe2EkY9MUOHk7wavt7mkMx4IDrd+QZ2B62TI/peV5GLwIOjJ59djS0CllU47M=) 2026-02-15 02:49:57.792420 | orchestrator | 2026-02-15 02:49:57.792427 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:49:57.792434 | orchestrator | Sunday 15 February 2026 02:49:52 +0000 (0:00:01.199) 0:00:09.162 ******* 2026-02-15 02:49:57.792442 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4FiHC517jz7mFBtha99JIuxe5q6bqCwzMPtKXY5vhqvnCbul/NnY+aAB8HLHJbYQryINDOqwRsX1mj9N+GNWAEpp+DzlXOnwHQVHpI1KMc/c1TtaOkDKh60KzGTCJCSZnSgV94i0V6HnRoEcj+7DVd9l0Z8c0+8iES0nzs7xOgexJXVxDi/qm7GyLRuYh26UPTnTaGFPWcSW2iSW8TEXGZIYvx5OZHdjSeDtVXjTj30QAccyh6lwB47zwWMHnJHBE/xlGcJLEXfZglbdg8sDRc8xIyOEcCed4BthXAOg389nYWGNAZj1U/waB1Wzoi8keYxG46BiLWT9D58iS5oOb0JIqpB0p+VM7sONlUt0VRjK3FEFQ3o8YcC1Q28CDbL8eWL9Nld18PTjIWM99SxZTVYAfFeaZM5YgI/6PIxKjT7kSY1QIqb7o5rAIat3TF5KRL8ZvP0YPKaWssX1qFlZGpcIyJDz7isOmgCcIovnnZzVl0Vy/xtXyBH7jnRQdeec=) 2026-02-15 02:49:57.792449 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMxIy6qV+mnmnNDz6Ahlcbw75MIMArEzwR9ktgy6G7BSokX+TSCdwVBQs/DLibZmzHy9RmiUcNs19ZPUNS3NIFw=) 
2026-02-15 02:49:57.792457 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBe8xJaKXZDiR6PCEpVQbe6tZpWXsa/+mR9dY7sPcE+O) 2026-02-15 02:49:57.792465 | orchestrator | 2026-02-15 02:49:57.792472 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:49:57.792479 | orchestrator | Sunday 15 February 2026 02:49:54 +0000 (0:00:01.228) 0:00:10.390 ******* 2026-02-15 02:49:57.792487 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0k6bwmwZcj5rTHMC2zQRoDtV029ouHeXXIf9wXUiATgZhZJhmK+iJMJ6gm4pYNx0n7fCPIG1DlIorOJdb9LGP9CBDH37/8Y4rHWC2RQYuQyzF+Y2rtbPxmljI+LjgAOXowoDEKKE3CND4qlocmMUOcKecRuNNnrh3h09LdvizpyRhASzqQrQf/Pk5Gom1Uf9KDokp6ucFZSEAhyJ8aYdGEjRu3sk2Vp4cQ+BU9WJxMi+KfGEHmq6ERyYLvhuA7RUb3E5WfDfPfgX1xwHqYy+eTYYL+R9u8Ie4Ex/GbbnnvMB0Z//sXfrJdLpohaDLt+ZOJBvEtDuf66YbD9bfU6CRJR+lOdJWoM5qNWoB1OQfJnJYe43TMq5lzreNvCKzgWKsuO7Hea+dOXFojmphKdkyYhJlXL/gnJcaJiAihmhVotEVsnv5uUzXRmtPBNAog17Bf9rwhlaK/mm4mhQcgqsmMnUehsV/oGkuel2WWioksi8muvxTwcn2QIfdnjdzD70=) 2026-02-15 02:49:57.792500 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBONxCZ5SfkVpnSEV23oCafbZjclvFveWQTLvzQrGzbbrR692ecNHCHu3Lf15/iTCzaw3mlulHrFK79Xu2UXE4qY=) 2026-02-15 02:49:57.792508 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE2x8tsecxRrCkm2G8MMdRW4tfIwOl23M6Zoyi/h+hm6) 2026-02-15 02:49:57.792515 | orchestrator | 2026-02-15 02:49:57.792522 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:49:57.792529 | orchestrator | Sunday 15 February 2026 02:49:55 +0000 (0:00:01.193) 0:00:11.584 ******* 2026-02-15 02:49:57.792594 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDex+uu7Y3pEKqbznweW60yerHb4r7P5Pek2Hwm8Ugjlbwwfvqn6T5sXYQ3A1tYeDD4o7Rspirh/efTb1vfd+ql/VxGBhLwOVlUEskJ2HgaJBZWcErfRB5bbmM4K8Ra7biODegcN7xmE7nllDZU37G/e0TfR/0sVuJ2lY6J4LBtppoOOIQ3Owzu/a5vx7l0r+v5GC5SjXNAc8WHxl7aOPUpslKQcTyrQV8M2cFVh9kC0oxZIfH6Pl8a64zkjlcsh+V+B95F107IffyN/Yh62o9pcjfCFe9i0AilAv8PMET0gnEYdXEcZmxPzGX/8au1C6PLgoGxs1741R44SpAAb7b3QjxVODK4KQvyXDhcfQTFo1vKSEzpQ2IabZlvzgtrEmkXGBO+vkpgULeSB3r9FQd+T+dJJND749aj50AJAUPW3DGuXLGsaNgdwJoehJSYcJdpFXCgDvefv6yzj8N8ekWp/UO8JJgwNI2Ht3giF6/pJ0C+yMkX0cxMHzEmy/u4sz8=) 2026-02-15 02:49:57.792602 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPkW5ntYNlpiBTBZe4jzQ2V9WYdP6CyxprKUgZjRUglLyKX9b5WjQxILziB/ZotT/9/uPcfH7XPV2HQoh1GKT0=) 2026-02-15 02:49:57.792610 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL7hnkD/lQeektj6UiuONOH5kXTyWYAqt+xLnCgC54rY) 2026-02-15 02:49:57.792617 | orchestrator | 2026-02-15 02:49:57.792625 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:49:57.792632 | orchestrator | Sunday 15 February 2026 02:49:56 +0000 (0:00:01.329) 0:00:12.913 ******* 2026-02-15 02:49:57.792646 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCO0c5UvQBvwxtfIsoz8Jz36kCF0Q/sMijaYplJ5+znm6JItCWEZbOGeUPcxw4QRYOv5H3E3pfL13ssMPbJ+VYQS9IvxbfF+KCv5h+Mlfq6SF6h+8XDlx2CiW0fb0VbM3x5CUPBNJnPqgrN4QQ5UaYwlk8xzV7jl/fW+2vJZDclxs84uvEqALnHcKZjp2qY37so1XErANqB8JFK4D/V0wuZ9pNSQhngw7BClk4OJNchIlen9z98rl8Mlwj0rYgWLZSwhFF4SU7edETrjmVsNzGF/6SRnSaskIxcVLFJQzjbK8GUwfmCujTKkPhYE8pjQtc8M6idJH3AS9YMIoK5RBInhxA8xnwLnJepX7aXYtrnrU2UCw4f0/9v4/1wj95UjbIGqM+MYw3cjXPfSOoRSTdt5a+okzJxFch73/nXhGrn+LAbDRs9zDIvHILKHztHAPm+iCCo5y0fEWE6Er2QI2EBAqrmzqnCGBdS4MEnWfeNtJL/DzbYmS0D/JRnyR1Ao48=) 2026-02-15 02:50:09.404060 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBObBHycUNLAxKWQZNe9VjtZDOn6i9IDrbq6Itzfe75fX/X9LINu/4ShJujk6zIgUwHQziUnTwcwQrD+OlfAlt6E=) 2026-02-15 02:50:09.404158 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBizhhgLMpQTKBo138Ez+oL4L4NxgrTmKggrw6ehaa5e) 2026-02-15 02:50:09.404169 | orchestrator | 2026-02-15 02:50:09.404177 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:50:09.404187 | orchestrator | Sunday 15 February 2026 02:49:57 +0000 (0:00:01.220) 0:00:14.134 ******* 2026-02-15 02:50:09.404196 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiyHE2BpHoPPUfluLlgfisbpHyOIWp6Tbr2GDL/4seGwvDaqPwYNGiKbcTiSv+D7cmaFATk6m7Ue508ymdf36029EcNb+mIqDttxyh7ciT3pHLfyLCyR9+tWHCRw65vt7pUx9smbX7lsDks4Q6KJ5Z7OlkcCZWWrofv8x2iH+TlqqtYWx83A6D/m/bmvT60hOWTJULJMwfu7cl6cy4XqqIW4M47G/LWAKPHUVlLzs/fpe6AbuGSHpGMFry7umKLZdQtge0vqpElX57yuDioJ4vWrMehjOCofOqrhhifxqyARTbwfUvyNibMUnyCwCOk9TLT2IG5VRLKvwdYNHwEHuX5c8a07TgtXeFj/oLpzu3Q1HvXd0UGJhLryjJEq0ET/SqII22QnX6wngwyPw35PsxWe3HB3//NSwhQTv9mANdNGiKz/6N57o56fHzsRrL5zJaaWnYCMeH5cA4m9MEAMJb7vCrROzcON63A5RgNGvAROVWb2obz3DU8Xw7u+U0N78=) 2026-02-15 02:50:09.404204 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBTLlmXtYD+Ym5iwFuJ/j17+pEAQ+DM8823J3eunTAUoIe34KkiwENJKt98ribjGXpFOxFc/Z9WhoHxIeTfyNZ0=) 2026-02-15 02:50:09.404231 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHtMXIg0tl8n/dpBftt9w3a6z5p+f4v/k0uMy37gEzd1) 2026-02-15 02:50:09.404239 | orchestrator | 2026-02-15 02:50:09.404246 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-02-15 02:50:09.404254 | orchestrator | Sunday 15 February 2026 02:49:58 +0000 
(0:00:01.150) 0:00:15.284 ******* 2026-02-15 02:50:09.404262 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-02-15 02:50:09.404312 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-02-15 02:50:09.404320 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-02-15 02:50:09.404327 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-02-15 02:50:09.404334 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-02-15 02:50:09.404341 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-02-15 02:50:09.404348 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-02-15 02:50:09.404354 | orchestrator | 2026-02-15 02:50:09.404361 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-02-15 02:50:09.404369 | orchestrator | Sunday 15 February 2026 02:50:04 +0000 (0:00:05.546) 0:00:20.831 ******* 2026-02-15 02:50:09.404377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-02-15 02:50:09.404386 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-02-15 02:50:09.404393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-02-15 02:50:09.404399 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-02-15 02:50:09.404406 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-02-15 02:50:09.404413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-02-15 02:50:09.404420 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-02-15 02:50:09.404427 | orchestrator | 2026-02-15 02:50:09.404433 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:50:09.404439 | orchestrator | Sunday 15 February 2026 02:50:04 +0000 (0:00:00.193) 0:00:21.024 ******* 2026-02-15 02:50:09.404445 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMKEAycDJC3BPswcCdew6/IFyK5V+8Ngvw8uGLFZtMUrkOG0Hds5F/USF6TLH0RUievZ8K1+gMmqtBVSwPnShp0=) 2026-02-15 02:50:09.404468 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCvHTNDstUJ3OG214ZabceJWRghvoJVmC2yCa/5lfKeE/1wSstXFVKe8Xqt4dqWiZTlg/eRO3YBZMw5ZVtUTF+VwOdg0e5qCf/sG/6wQT3tzc86/c33dTxihFshFrIoqLKkKQkqqSmsUEVAMv4elUvD1O9bugd25naLu2KpMTEqt8kGUaxYurmOqwxzulOunXW50SNJYCuINpmbamhxxQ4PcOI+yIZOYgfS9QYwGqVPckK2HKeFOErQBBpdlseTg5cur4FDOb8HnQyGpx7GARgqqZJLJAULq0HB35crxVnZSxMP0HLRIWbqiZtOzZdTESUHu1oroarwvb2v1gat546JMH8Xzo4M9Hn0jvsunfrmL1RaA3oZdK/EYRznZvntGe9/dOGcbhEUJoLSxAHKBvc2o4REWDYXkHuFt/smz2lLm1ns5Z1hw1vza4DgjKeAJiS2fGKEGSsvnQTJthckxnBfWhgu+GkZm6HmqqSKLUol6JXj9+6JWZdiOtVR+CaWNvs=) 2026-02-15 02:50:09.404496 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAAGshswXWgsZMWTmoJhw9X/KUE42Lff9llnSsqgvXKf) 2026-02-15 
02:50:09.404503 | orchestrator | 2026-02-15 02:50:09.404510 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:50:09.404517 | orchestrator | Sunday 15 February 2026 02:50:05 +0000 (0:00:01.145) 0:00:22.170 ******* 2026-02-15 02:50:09.404528 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDCm8nqZ6kPzIj95QXeuGdkPHzEQ1esDHK0G80KGxTHkASxwugduWleYSqBzpvraCP/bmEV8LBqHKc0c6l/1KxVg214PgOauaKPRsCCCvZQvsMrscjV1CaX7TyENIoQQeAsonsCgNIVnACCk1V+J+OLBlwzpz3/8xtD0S3K/NgebNFiPJvyPZefqAcG/UmFMvHFBSPM14NFtqC4Rozsaa9o5K/QaHjHUmSlAnoyvAvqDo6phjdy+l33Mq+kp5Pvgg32a/Tz+YmxDjwLnnJDScL6hIn+KOO7dJv45z3b3gYrI/dmqH6TdDUvDdyf0A7aefBlHSFi/vheLYBceeQ0eyBzTE++JHLDAwWeLS5OIOvKyVrYubIUwNQLeub5hHfvIi0aK7KsyB5c+Vo4NEAgUbeJy5YarxN5KmMA+RgPMm4ApgGJg3L2mfa5MNAv7kXvwidzNN+kk16LmqYpSruv6i5qdRv4aLepHG4cpUIOVEd+DomwRpv/uhlYE9RqeWoRkoE=) 2026-02-15 02:50:09.404535 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnczmzpSls0/ejUn3J1ic44nsTe2EkY9MUOHk7wavt7mkMx4IDrd+QZ2B62TI/peV5GLwIOjJ59djS0CllU47M=) 2026-02-15 02:50:09.404542 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPugSSmqTpsChRln24EcgqCqQxkdbTIEUvHBn6bv6lwC) 2026-02-15 02:50:09.404549 | orchestrator | 2026-02-15 02:50:09.404555 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:50:09.404562 | orchestrator | Sunday 15 February 2026 02:50:06 +0000 (0:00:01.186) 0:00:23.356 ******* 2026-02-15 02:50:09.404569 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMxIy6qV+mnmnNDz6Ahlcbw75MIMArEzwR9ktgy6G7BSokX+TSCdwVBQs/DLibZmzHy9RmiUcNs19ZPUNS3NIFw=) 2026-02-15 02:50:09.404576 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBe8xJaKXZDiR6PCEpVQbe6tZpWXsa/+mR9dY7sPcE+O) 2026-02-15 02:50:09.404583 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4FiHC517jz7mFBtha99JIuxe5q6bqCwzMPtKXY5vhqvnCbul/NnY+aAB8HLHJbYQryINDOqwRsX1mj9N+GNWAEpp+DzlXOnwHQVHpI1KMc/c1TtaOkDKh60KzGTCJCSZnSgV94i0V6HnRoEcj+7DVd9l0Z8c0+8iES0nzs7xOgexJXVxDi/qm7GyLRuYh26UPTnTaGFPWcSW2iSW8TEXGZIYvx5OZHdjSeDtVXjTj30QAccyh6lwB47zwWMHnJHBE/xlGcJLEXfZglbdg8sDRc8xIyOEcCed4BthXAOg389nYWGNAZj1U/waB1Wzoi8keYxG46BiLWT9D58iS5oOb0JIqpB0p+VM7sONlUt0VRjK3FEFQ3o8YcC1Q28CDbL8eWL9Nld18PTjIWM99SxZTVYAfFeaZM5YgI/6PIxKjT7kSY1QIqb7o5rAIat3TF5KRL8ZvP0YPKaWssX1qFlZGpcIyJDz7isOmgCcIovnnZzVl0Vy/xtXyBH7jnRQdeec=) 2026-02-15 02:50:09.404590 | orchestrator | 2026-02-15 02:50:09.404597 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:50:09.404604 | orchestrator | Sunday 15 February 2026 02:50:08 +0000 (0:00:01.193) 0:00:24.549 ******* 2026-02-15 02:50:09.404610 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIE2x8tsecxRrCkm2G8MMdRW4tfIwOl23M6Zoyi/h+hm6) 2026-02-15 02:50:09.404617 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0k6bwmwZcj5rTHMC2zQRoDtV029ouHeXXIf9wXUiATgZhZJhmK+iJMJ6gm4pYNx0n7fCPIG1DlIorOJdb9LGP9CBDH37/8Y4rHWC2RQYuQyzF+Y2rtbPxmljI+LjgAOXowoDEKKE3CND4qlocmMUOcKecRuNNnrh3h09LdvizpyRhASzqQrQf/Pk5Gom1Uf9KDokp6ucFZSEAhyJ8aYdGEjRu3sk2Vp4cQ+BU9WJxMi+KfGEHmq6ERyYLvhuA7RUb3E5WfDfPfgX1xwHqYy+eTYYL+R9u8Ie4Ex/GbbnnvMB0Z//sXfrJdLpohaDLt+ZOJBvEtDuf66YbD9bfU6CRJR+lOdJWoM5qNWoB1OQfJnJYe43TMq5lzreNvCKzgWKsuO7Hea+dOXFojmphKdkyYhJlXL/gnJcaJiAihmhVotEVsnv5uUzXRmtPBNAog17Bf9rwhlaK/mm4mhQcgqsmMnUehsV/oGkuel2WWioksi8muvxTwcn2QIfdnjdzD70=) 2026-02-15 02:50:09.404632 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBONxCZ5SfkVpnSEV23oCafbZjclvFveWQTLvzQrGzbbrR692ecNHCHu3Lf15/iTCzaw3mlulHrFK79Xu2UXE4qY=) 2026-02-15 02:50:14.509101 | orchestrator | 2026-02-15 02:50:14.509249 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:50:14.509274 | orchestrator | Sunday 15 February 2026 02:50:09 +0000 (0:00:01.191) 0:00:25.741 ******* 2026-02-15 02:50:14.509356 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL7hnkD/lQeektj6UiuONOH5kXTyWYAqt+xLnCgC54rY) 2026-02-15 02:50:14.510145 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDex+uu7Y3pEKqbznweW60yerHb4r7P5Pek2Hwm8Ugjlbwwfvqn6T5sXYQ3A1tYeDD4o7Rspirh/efTb1vfd+ql/VxGBhLwOVlUEskJ2HgaJBZWcErfRB5bbmM4K8Ra7biODegcN7xmE7nllDZU37G/e0TfR/0sVuJ2lY6J4LBtppoOOIQ3Owzu/a5vx7l0r+v5GC5SjXNAc8WHxl7aOPUpslKQcTyrQV8M2cFVh9kC0oxZIfH6Pl8a64zkjlcsh+V+B95F107IffyN/Yh62o9pcjfCFe9i0AilAv8PMET0gnEYdXEcZmxPzGX/8au1C6PLgoGxs1741R44SpAAb7b3QjxVODK4KQvyXDhcfQTFo1vKSEzpQ2IabZlvzgtrEmkXGBO+vkpgULeSB3r9FQd+T+dJJND749aj50AJAUPW3DGuXLGsaNgdwJoehJSYcJdpFXCgDvefv6yzj8N8ekWp/UO8JJgwNI2Ht3giF6/pJ0C+yMkX0cxMHzEmy/u4sz8=) 2026-02-15 02:50:14.510189 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCPkW5ntYNlpiBTBZe4jzQ2V9WYdP6CyxprKUgZjRUglLyKX9b5WjQxILziB/ZotT/9/uPcfH7XPV2HQoh1GKT0=) 2026-02-15 02:50:14.510210 | orchestrator | 2026-02-15 02:50:14.510232 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:50:14.510252 | orchestrator | Sunday 15 February 2026 02:50:10 +0000 (0:00:01.212) 0:00:26.954 ******* 2026-02-15 02:50:14.510272 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBObBHycUNLAxKWQZNe9VjtZDOn6i9IDrbq6Itzfe75fX/X9LINu/4ShJujk6zIgUwHQziUnTwcwQrD+OlfAlt6E=) 2026-02-15 02:50:14.510345 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCO0c5UvQBvwxtfIsoz8Jz36kCF0Q/sMijaYplJ5+znm6JItCWEZbOGeUPcxw4QRYOv5H3E3pfL13ssMPbJ+VYQS9IvxbfF+KCv5h+Mlfq6SF6h+8XDlx2CiW0fb0VbM3x5CUPBNJnPqgrN4QQ5UaYwlk8xzV7jl/fW+2vJZDclxs84uvEqALnHcKZjp2qY37so1XErANqB8JFK4D/V0wuZ9pNSQhngw7BClk4OJNchIlen9z98rl8Mlwj0rYgWLZSwhFF4SU7edETrjmVsNzGF/6SRnSaskIxcVLFJQzjbK8GUwfmCujTKkPhYE8pjQtc8M6idJH3AS9YMIoK5RBInhxA8xnwLnJepX7aXYtrnrU2UCw4f0/9v4/1wj95UjbIGqM+MYw3cjXPfSOoRSTdt5a+okzJxFch73/nXhGrn+LAbDRs9zDIvHILKHztHAPm+iCCo5y0fEWE6Er2QI2EBAqrmzqnCGBdS4MEnWfeNtJL/DzbYmS0D/JRnyR1Ao48=) 2026-02-15 02:50:14.510368 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBizhhgLMpQTKBo138Ez+oL4L4NxgrTmKggrw6ehaa5e) 2026-02-15 02:50:14.510387 | orchestrator | 2026-02-15 02:50:14.510408 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-02-15 02:50:14.510428 | orchestrator | Sunday 15 February 2026 02:50:11 +0000 (0:00:01.221) 0:00:28.175 ******* 2026-02-15 02:50:14.510449 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBTLlmXtYD+Ym5iwFuJ/j17+pEAQ+DM8823J3eunTAUoIe34KkiwENJKt98ribjGXpFOxFc/Z9WhoHxIeTfyNZ0=) 2026-02-15 02:50:14.510493 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCiyHE2BpHoPPUfluLlgfisbpHyOIWp6Tbr2GDL/4seGwvDaqPwYNGiKbcTiSv+D7cmaFATk6m7Ue508ymdf36029EcNb+mIqDttxyh7ciT3pHLfyLCyR9+tWHCRw65vt7pUx9smbX7lsDks4Q6KJ5Z7OlkcCZWWrofv8x2iH+TlqqtYWx83A6D/m/bmvT60hOWTJULJMwfu7cl6cy4XqqIW4M47G/LWAKPHUVlLzs/fpe6AbuGSHpGMFry7umKLZdQtge0vqpElX57yuDioJ4vWrMehjOCofOqrhhifxqyARTbwfUvyNibMUnyCwCOk9TLT2IG5VRLKvwdYNHwEHuX5c8a07TgtXeFj/oLpzu3Q1HvXd0UGJhLryjJEq0ET/SqII22QnX6wngwyPw35PsxWe3HB3//NSwhQTv9mANdNGiKz/6N57o56fHzsRrL5zJaaWnYCMeH5cA4m9MEAMJb7vCrROzcON63A5RgNGvAROVWb2obz3DU8Xw7u+U0N78=) 2026-02-15 02:50:14.510514 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHtMXIg0tl8n/dpBftt9w3a6z5p+f4v/k0uMy37gEzd1) 2026-02-15 02:50:14.510532 | orchestrator | 2026-02-15 02:50:14.510550 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-02-15 02:50:14.510599 | orchestrator | Sunday 15 February 2026 02:50:13 +0000 (0:00:01.272) 0:00:29.447 ******* 2026-02-15 02:50:14.510620 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-15 02:50:14.510640 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-15 02:50:14.510659 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-15 02:50:14.510678 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-15 02:50:14.510698 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-15 02:50:14.510718 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-15 02:50:14.510738 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-15 02:50:14.510757 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:50:14.510776 | orchestrator | 2026-02-15 02:50:14.510826 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-02-15 02:50:14.510845 | orchestrator | Sunday 15 February 
2026 02:50:13 +0000 (0:00:00.199) 0:00:29.647 ******* 2026-02-15 02:50:14.510866 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:50:14.510885 | orchestrator | 2026-02-15 02:50:14.510906 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-02-15 02:50:14.510926 | orchestrator | Sunday 15 February 2026 02:50:13 +0000 (0:00:00.064) 0:00:29.711 ******* 2026-02-15 02:50:14.510946 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:50:14.510967 | orchestrator | 2026-02-15 02:50:14.510986 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-02-15 02:50:14.511007 | orchestrator | Sunday 15 February 2026 02:50:13 +0000 (0:00:00.069) 0:00:29.781 ******* 2026-02-15 02:50:14.511027 | orchestrator | changed: [testbed-manager] 2026-02-15 02:50:14.511046 | orchestrator | 2026-02-15 02:50:14.511067 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 02:50:14.511087 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-15 02:50:14.511107 | orchestrator | 2026-02-15 02:50:14.511126 | orchestrator | 2026-02-15 02:50:14.511145 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 02:50:14.511165 | orchestrator | Sunday 15 February 2026 02:50:14 +0000 (0:00:00.816) 0:00:30.598 ******* 2026-02-15 02:50:14.511193 | orchestrator | =============================================================================== 2026-02-15 02:50:14.511213 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.28s 2026-02-15 02:50:14.511232 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.55s 2026-02-15 02:50:14.511252 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.34s 2026-02-15 
02:50:14.511272 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.33s 2026-02-15 02:50:14.511316 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.27s 2026-02-15 02:50:14.511337 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2026-02-15 02:50:14.511356 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-02-15 02:50:14.511377 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.22s 2026-02-15 02:50:14.511397 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-02-15 02:50:14.511414 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.20s 2026-02-15 02:50:14.511434 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-02-15 02:50:14.511454 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-02-15 02:50:14.511470 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-02-15 02:50:14.511487 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.19s 2026-02-15 02:50:14.511520 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-15 02:50:14.511540 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2026-02-15 02:50:14.511560 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.82s 2026-02-15 02:50:14.511581 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s 2026-02-15 02:50:14.511601 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.19s 
2026-02-15 02:50:14.511623 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2026-02-15 02:50:14.890226 | orchestrator | + osism apply squid 2026-02-15 02:50:27.359945 | orchestrator | 2026-02-15 02:50:27 | INFO  | Task 773e8cce-3ad4-4b67-a66e-1b2c69372444 (squid) was prepared for execution. 2026-02-15 02:50:27.360035 | orchestrator | 2026-02-15 02:50:27 | INFO  | It takes a moment until task 773e8cce-3ad4-4b67-a66e-1b2c69372444 (squid) has been started and output is visible here. 2026-02-15 02:52:23.427776 | orchestrator | 2026-02-15 02:52:23.427988 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-02-15 02:52:23.428007 | orchestrator | 2026-02-15 02:52:23.428020 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-02-15 02:52:23.428031 | orchestrator | Sunday 15 February 2026 02:50:32 +0000 (0:00:00.194) 0:00:00.194 ******* 2026-02-15 02:52:23.428043 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-02-15 02:52:23.428055 | orchestrator | 2026-02-15 02:52:23.428066 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-02-15 02:52:23.428077 | orchestrator | Sunday 15 February 2026 02:50:32 +0000 (0:00:00.099) 0:00:00.293 ******* 2026-02-15 02:52:23.428088 | orchestrator | ok: [testbed-manager] 2026-02-15 02:52:23.428100 | orchestrator | 2026-02-15 02:52:23.428111 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-02-15 02:52:23.428121 | orchestrator | Sunday 15 February 2026 02:50:33 +0000 (0:00:01.667) 0:00:01.961 ******* 2026-02-15 02:52:23.428133 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-02-15 02:52:23.428144 | orchestrator | changed: 
[testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-02-15 02:52:23.428154 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-02-15 02:52:23.428165 | orchestrator | 2026-02-15 02:52:23.428176 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-02-15 02:52:23.428186 | orchestrator | Sunday 15 February 2026 02:50:35 +0000 (0:00:01.217) 0:00:03.179 ******* 2026-02-15 02:52:23.428197 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-02-15 02:52:23.428208 | orchestrator | 2026-02-15 02:52:23.428219 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-02-15 02:52:23.428229 | orchestrator | Sunday 15 February 2026 02:50:36 +0000 (0:00:01.204) 0:00:04.383 ******* 2026-02-15 02:52:23.428240 | orchestrator | ok: [testbed-manager] 2026-02-15 02:52:23.428251 | orchestrator | 2026-02-15 02:52:23.428261 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-02-15 02:52:23.428272 | orchestrator | Sunday 15 February 2026 02:50:36 +0000 (0:00:00.371) 0:00:04.754 ******* 2026-02-15 02:52:23.428284 | orchestrator | changed: [testbed-manager] 2026-02-15 02:52:23.428295 | orchestrator | 2026-02-15 02:52:23.428305 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-02-15 02:52:23.428316 | orchestrator | Sunday 15 February 2026 02:50:37 +0000 (0:00:01.054) 0:00:05.809 ******* 2026-02-15 02:52:23.428329 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-02-15 02:52:23.428346 | orchestrator | ok: [testbed-manager] 2026-02-15 02:52:23.428359 | orchestrator | 2026-02-15 02:52:23.428371 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-02-15 02:52:23.428411 | orchestrator | Sunday 15 February 2026 02:51:10 +0000 (0:00:32.400) 0:00:38.210 ******* 2026-02-15 02:52:23.428424 | orchestrator | changed: [testbed-manager] 2026-02-15 02:52:23.428444 | orchestrator | 2026-02-15 02:52:23.428463 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-02-15 02:52:23.428482 | orchestrator | Sunday 15 February 2026 02:51:22 +0000 (0:00:12.253) 0:00:50.463 ******* 2026-02-15 02:52:23.428501 | orchestrator | Pausing for 60 seconds 2026-02-15 02:52:23.428520 | orchestrator | changed: [testbed-manager] 2026-02-15 02:52:23.428538 | orchestrator | 2026-02-15 02:52:23.428558 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-02-15 02:52:23.428577 | orchestrator | Sunday 15 February 2026 02:52:22 +0000 (0:01:00.084) 0:01:50.548 ******* 2026-02-15 02:52:23.428595 | orchestrator | ok: [testbed-manager] 2026-02-15 02:52:23.428614 | orchestrator | 2026-02-15 02:52:23.428635 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-02-15 02:52:23.428653 | orchestrator | Sunday 15 February 2026 02:52:22 +0000 (0:00:00.087) 0:01:50.636 ******* 2026-02-15 02:52:23.428669 | orchestrator | changed: [testbed-manager] 2026-02-15 02:52:23.428680 | orchestrator | 2026-02-15 02:52:23.428691 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 02:52:23.428701 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 02:52:23.428712 | orchestrator | 2026-02-15 02:52:23.428723 | orchestrator | 2026-02-15 02:52:23.428734 | orchestrator | 
TASKS RECAP ********************************************************************
2026-02-15 02:52:23.428744 | orchestrator | Sunday 15 February 2026 02:52:23 +0000 (0:00:00.664) 0:01:51.300 *******
2026-02-15 02:52:23.428755 | orchestrator | ===============================================================================
2026-02-15 02:52:23.428766 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s
2026-02-15 02:52:23.428776 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.40s
2026-02-15 02:52:23.428787 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.25s
2026-02-15 02:52:23.428845 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.67s
2026-02-15 02:52:23.428858 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s
2026-02-15 02:52:23.428869 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.20s
2026-02-15 02:52:23.428880 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 1.05s
2026-02-15 02:52:23.428891 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s
2026-02-15 02:52:23.428901 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s
2026-02-15 02:52:23.428912 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s
2026-02-15 02:52:23.428922 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.09s
2026-02-15 02:52:23.776795 | orchestrator | + [[ 9.5.0 != \l\a\t\e\s\t ]]
2026-02-15 02:52:23.776907 | orchestrator | ++ semver 9.5.0 10.0.0-0
2026-02-15 02:52:23.838872 | orchestrator | + [[ -1 -ge 0 ]]
2026-02-15 02:52:23.838980 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release
2026-02-15 02:52:23.848326 | orchestrator | + set -e
2026-02-15 02:52:23.848419 | orchestrator | + NAMESPACE=kolla/release
2026-02-15 02:52:23.848438 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml
2026-02-15 02:52:23.855565 | orchestrator | ++ semver 9.5.0 9.0.0
2026-02-15 02:52:23.934530 | orchestrator | + [[ 1 -lt 0 ]]
2026-02-15 02:52:23.934891 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2026-02-15 02:52:36.173105 | orchestrator | 2026-02-15 02:52:36 | INFO  | Task 0d3959e9-60df-469c-99c1-2eea2b9711ee (operator) was prepared for execution.
2026-02-15 02:52:36.173221 | orchestrator | 2026-02-15 02:52:36 | INFO  | It takes a moment until task 0d3959e9-60df-469c-99c1-2eea2b9711ee (operator) has been started and output is visible here.
2026-02-15 02:52:53.638671 | orchestrator |
2026-02-15 02:52:53.638787 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2026-02-15 02:52:53.638805 | orchestrator |
2026-02-15 02:52:53.638817 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-02-15 02:52:53.638829 | orchestrator | Sunday 15 February 2026 02:52:40 +0000 (0:00:00.160) 0:00:00.160 *******
2026-02-15 02:52:53.638840 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:52:53.638852 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:52:53.638863 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:52:53.638873 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:52:53.638884 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:52:53.638895 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:52:53.638905 | orchestrator |
2026-02-15 02:52:53.638965 | orchestrator | TASK [Do not require tty for all users] ****************************************
2026-02-15 02:52:53.638978 | orchestrator | Sunday 15 February 2026 02:52:45 +0000 (0:00:04.352) 0:00:04.513
******* 2026-02-15 02:52:53.638998 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:52:53.639015 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:52:53.639031 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:52:53.639048 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:52:53.639066 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:52:53.639083 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:52:53.639100 | orchestrator | 2026-02-15 02:52:53.639120 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-02-15 02:52:53.639139 | orchestrator | 2026-02-15 02:52:53.639157 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-02-15 02:52:53.639174 | orchestrator | Sunday 15 February 2026 02:52:45 +0000 (0:00:00.809) 0:00:05.322 ******* 2026-02-15 02:52:53.639185 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:52:53.639196 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:52:53.639207 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:52:53.639220 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:52:53.639250 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:52:53.639263 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:52:53.639287 | orchestrator | 2026-02-15 02:52:53.639300 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-02-15 02:52:53.639331 | orchestrator | Sunday 15 February 2026 02:52:46 +0000 (0:00:00.199) 0:00:05.522 ******* 2026-02-15 02:52:53.639344 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:52:53.639356 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:52:53.639368 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:52:53.639380 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:52:53.639391 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:52:53.639403 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:52:53.639415 | orchestrator | 2026-02-15 02:52:53.639427 | 
orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-02-15 02:52:53.639440 | orchestrator | Sunday 15 February 2026 02:52:46 +0000 (0:00:00.218) 0:00:05.741 ******* 2026-02-15 02:52:53.639453 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:52:53.639467 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:52:53.639480 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:52:53.639492 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:52:53.639504 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:52:53.639516 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:52:53.639528 | orchestrator | 2026-02-15 02:52:53.639541 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-02-15 02:52:53.639553 | orchestrator | Sunday 15 February 2026 02:52:46 +0000 (0:00:00.666) 0:00:06.407 ******* 2026-02-15 02:52:53.639566 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:52:53.639578 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:52:53.639591 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:52:53.639602 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:52:53.639612 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:52:53.639623 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:52:53.639660 | orchestrator | 2026-02-15 02:52:53.639671 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-02-15 02:52:53.639682 | orchestrator | Sunday 15 February 2026 02:52:47 +0000 (0:00:00.789) 0:00:07.197 ******* 2026-02-15 02:52:53.639692 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-02-15 02:52:53.639704 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-02-15 02:52:53.639714 | orchestrator | changed: [testbed-node-3] => (item=adm) 2026-02-15 02:52:53.639724 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-02-15 02:52:53.639735 | 
orchestrator | changed: [testbed-node-4] => (item=adm) 2026-02-15 02:52:53.639745 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-02-15 02:52:53.639756 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-02-15 02:52:53.639766 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-02-15 02:52:53.639777 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-02-15 02:52:53.639787 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-02-15 02:52:53.639798 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-02-15 02:52:53.639808 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-02-15 02:52:53.639819 | orchestrator | 2026-02-15 02:52:53.639830 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-02-15 02:52:53.639840 | orchestrator | Sunday 15 February 2026 02:52:48 +0000 (0:00:01.162) 0:00:08.360 ******* 2026-02-15 02:52:53.639851 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:52:53.639861 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:52:53.639872 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:52:53.639882 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:52:53.639893 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:52:53.639903 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:52:53.639914 | orchestrator | 2026-02-15 02:52:53.639987 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-02-15 02:52:53.640000 | orchestrator | Sunday 15 February 2026 02:52:50 +0000 (0:00:01.260) 0:00:09.620 ******* 2026-02-15 02:52:53.640011 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-02-15 02:52:53.640021 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-02-15 02:52:53.640032 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-02-15 02:52:53.640043 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-02-15 02:52:53.640073 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-02-15 02:52:53.640085 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-02-15 02:52:53.640095 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2026-02-15 02:52:53.640106 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-02-15 02:52:53.640116 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-02-15 02:52:53.640127 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-02-15 02:52:53.640137 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-02-15 02:52:53.640148 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-02-15 02:52:53.640158 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-02-15 02:52:53.640168 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-02-15 02:52:53.640179 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-02-15 02:52:53.640190 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-02-15 02:52:53.640200 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-02-15 02:52:53.640211 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-02-15 02:52:53.640221 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-02-15 02:52:53.640232 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-02-15 02:52:53.640256 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-02-15 02:52:53.640267 | 
orchestrator | 2026-02-15 02:52:53.640277 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-02-15 02:52:53.640289 | orchestrator | Sunday 15 February 2026 02:52:51 +0000 (0:00:01.187) 0:00:10.808 ******* 2026-02-15 02:52:53.640300 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:52:53.640310 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:52:53.640321 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:52:53.640332 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:52:53.640342 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:52:53.640353 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:52:53.640364 | orchestrator | 2026-02-15 02:52:53.640375 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-02-15 02:52:53.640386 | orchestrator | Sunday 15 February 2026 02:52:51 +0000 (0:00:00.184) 0:00:10.993 ******* 2026-02-15 02:52:53.640396 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:52:53.640407 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:52:53.640417 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:52:53.640428 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:52:53.640438 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:52:53.640449 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:52:53.640460 | orchestrator | 2026-02-15 02:52:53.640471 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-02-15 02:52:53.640481 | orchestrator | Sunday 15 February 2026 02:52:51 +0000 (0:00:00.198) 0:00:11.191 ******* 2026-02-15 02:52:53.640492 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:52:53.640503 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:52:53.640513 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:52:53.640523 | orchestrator | changed: [testbed-node-4] 2026-02-15 
02:52:53.640534 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:52:53.640545 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:52:53.640555 | orchestrator | 2026-02-15 02:52:53.640566 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-02-15 02:52:53.640577 | orchestrator | Sunday 15 February 2026 02:52:52 +0000 (0:00:00.629) 0:00:11.821 ******* 2026-02-15 02:52:53.640587 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:52:53.640598 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:52:53.640608 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:52:53.640619 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:52:53.640630 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:52:53.640640 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:52:53.640651 | orchestrator | 2026-02-15 02:52:53.640661 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-02-15 02:52:53.640672 | orchestrator | Sunday 15 February 2026 02:52:52 +0000 (0:00:00.226) 0:00:12.048 ******* 2026-02-15 02:52:53.640683 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-02-15 02:52:53.640702 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:52:53.640714 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-15 02:52:53.640724 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:52:53.640735 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-15 02:52:53.640746 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:52:53.640756 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-15 02:52:53.640767 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-02-15 02:52:53.640778 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-15 02:52:53.640788 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:52:53.640799 | orchestrator | changed: [testbed-node-1] 2026-02-15 
02:52:53.640809 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:52:53.640819 | orchestrator | 2026-02-15 02:52:53.640830 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-02-15 02:52:53.640841 | orchestrator | Sunday 15 February 2026 02:52:53 +0000 (0:00:00.716) 0:00:12.764 ******* 2026-02-15 02:52:53.640859 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:52:53.640870 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:52:53.640880 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:52:53.640891 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:52:53.640901 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:52:53.640912 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:52:53.640942 | orchestrator | 2026-02-15 02:52:53.640954 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-02-15 02:52:53.640965 | orchestrator | Sunday 15 February 2026 02:52:53 +0000 (0:00:00.174) 0:00:12.939 ******* 2026-02-15 02:52:53.640975 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:52:53.640986 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:52:53.640997 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:52:53.641008 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:52:53.641027 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:52:55.051899 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:52:55.052059 | orchestrator | 2026-02-15 02:52:55.052083 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-02-15 02:52:55.052101 | orchestrator | Sunday 15 February 2026 02:52:53 +0000 (0:00:00.185) 0:00:13.124 ******* 2026-02-15 02:52:55.052115 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:52:55.052130 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:52:55.052145 | orchestrator | skipping: [testbed-node-2] 2026-02-15 
02:52:55.052161 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:52:55.052176 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:52:55.052190 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:52:55.052203 | orchestrator | 2026-02-15 02:52:55.052217 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-02-15 02:52:55.052230 | orchestrator | Sunday 15 February 2026 02:52:53 +0000 (0:00:00.181) 0:00:13.306 ******* 2026-02-15 02:52:55.052244 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:52:55.052258 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:52:55.052271 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:52:55.052287 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:52:55.052302 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:52:55.052314 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:52:55.052327 | orchestrator | 2026-02-15 02:52:55.052342 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-02-15 02:52:55.052357 | orchestrator | Sunday 15 February 2026 02:52:54 +0000 (0:00:00.660) 0:00:13.967 ******* 2026-02-15 02:52:55.052371 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:52:55.052385 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:52:55.052401 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:52:55.052417 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:52:55.052432 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:52:55.052447 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:52:55.052463 | orchestrator | 2026-02-15 02:52:55.052478 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 02:52:55.052517 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-15 02:52:55.052535 | orchestrator | testbed-node-1 : ok=12  changed=8 
 unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-15 02:52:55.052551 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-15 02:52:55.052568 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-15 02:52:55.052584 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-15 02:52:55.052696 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-15 02:52:55.052717 | orchestrator | 2026-02-15 02:52:55.052734 | orchestrator | 2026-02-15 02:52:55.052747 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 02:52:55.052762 | orchestrator | Sunday 15 February 2026 02:52:54 +0000 (0:00:00.294) 0:00:14.261 ******* 2026-02-15 02:52:55.052776 | orchestrator | =============================================================================== 2026-02-15 02:52:55.052791 | orchestrator | Gathering Facts --------------------------------------------------------- 4.35s 2026-02-15 02:52:55.052807 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s 2026-02-15 02:52:55.052821 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.19s 2026-02-15 02:52:55.052839 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s 2026-02-15 02:52:55.052853 | orchestrator | Do not require tty for all users ---------------------------------------- 0.81s 2026-02-15 02:52:55.052868 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.79s 2026-02-15 02:52:55.052883 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.72s 2026-02-15 02:52:55.052898 | orchestrator | osism.commons.operator : Create 
operator group -------------------------- 0.67s 2026-02-15 02:52:55.052914 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s 2026-02-15 02:52:55.052954 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.63s 2026-02-15 02:52:55.052971 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.29s 2026-02-15 02:52:55.052984 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.23s 2026-02-15 02:52:55.052993 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.22s 2026-02-15 02:52:55.053002 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.20s 2026-02-15 02:52:55.053011 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.20s 2026-02-15 02:52:55.053020 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.19s 2026-02-15 02:52:55.053028 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.18s 2026-02-15 02:52:55.053037 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.18s 2026-02-15 02:52:55.053046 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.17s 2026-02-15 02:52:55.431257 | orchestrator | + osism apply --environment custom facts 2026-02-15 02:52:57.539314 | orchestrator | 2026-02-15 02:52:57 | INFO  | Trying to run play facts in environment custom 2026-02-15 02:53:07.638238 | orchestrator | 2026-02-15 02:53:07 | INFO  | Task 684c28f7-c934-48dc-94c2-5492902a07a6 (facts) was prepared for execution. 2026-02-15 02:53:07.638338 | orchestrator | 2026-02-15 02:53:07 | INFO  | It takes a moment until task 684c28f7-c934-48dc-94c2-5492902a07a6 (facts) has been started and output is visible here. 
2026-02-15 02:53:50.934632 | orchestrator | 2026-02-15 02:53:50.934770 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-02-15 02:53:50.934784 | orchestrator | 2026-02-15 02:53:50.934794 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-15 02:53:50.934803 | orchestrator | Sunday 15 February 2026 02:53:12 +0000 (0:00:00.105) 0:00:00.105 ******* 2026-02-15 02:53:50.934811 | orchestrator | ok: [testbed-manager] 2026-02-15 02:53:50.934821 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:53:50.934830 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:53:50.934839 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:53:50.934847 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:53:50.934854 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:53:50.934883 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:53:50.934893 | orchestrator | 2026-02-15 02:53:50.934901 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-02-15 02:53:50.934910 | orchestrator | Sunday 15 February 2026 02:53:13 +0000 (0:00:01.502) 0:00:01.608 ******* 2026-02-15 02:53:50.934918 | orchestrator | ok: [testbed-manager] 2026-02-15 02:53:50.934926 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:53:50.934934 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:53:50.934942 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:53:50.934950 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:53:50.934958 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:53:50.934966 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:53:50.934974 | orchestrator | 2026-02-15 02:53:50.934983 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-02-15 02:53:50.934990 | orchestrator | 2026-02-15 02:53:50.934998 | orchestrator | TASK 
[osism.commons.repository : Gather variables for each operating system] *** 2026-02-15 02:53:50.935006 | orchestrator | Sunday 15 February 2026 02:53:15 +0000 (0:00:01.278) 0:00:02.886 ******* 2026-02-15 02:53:50.935015 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:53:50.935023 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:53:50.935031 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:53:50.935038 | orchestrator | 2026-02-15 02:53:50.935046 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-15 02:53:50.935056 | orchestrator | Sunday 15 February 2026 02:53:15 +0000 (0:00:00.118) 0:00:03.004 ******* 2026-02-15 02:53:50.935064 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:53:50.935072 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:53:50.935080 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:53:50.935088 | orchestrator | 2026-02-15 02:53:50.935096 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-15 02:53:50.935104 | orchestrator | Sunday 15 February 2026 02:53:15 +0000 (0:00:00.240) 0:00:03.245 ******* 2026-02-15 02:53:50.935112 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:53:50.935138 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:53:50.935146 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:53:50.935154 | orchestrator | 2026-02-15 02:53:50.935161 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-15 02:53:50.935171 | orchestrator | Sunday 15 February 2026 02:53:15 +0000 (0:00:00.232) 0:00:03.477 ******* 2026-02-15 02:53:50.935181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 02:53:50.935190 | orchestrator | 2026-02-15 02:53:50.935198 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d 
directory] ***** 2026-02-15 02:53:50.935206 | orchestrator | Sunday 15 February 2026 02:53:15 +0000 (0:00:00.168) 0:00:03.646 ******* 2026-02-15 02:53:50.935214 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:53:50.935221 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:53:50.935229 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:53:50.935238 | orchestrator | 2026-02-15 02:53:50.935246 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-15 02:53:50.935254 | orchestrator | Sunday 15 February 2026 02:53:16 +0000 (0:00:00.477) 0:00:04.123 ******* 2026-02-15 02:53:50.935262 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:53:50.935270 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:53:50.935277 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:53:50.935285 | orchestrator | 2026-02-15 02:53:50.935293 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-15 02:53:50.935301 | orchestrator | Sunday 15 February 2026 02:53:16 +0000 (0:00:00.148) 0:00:04.272 ******* 2026-02-15 02:53:50.935309 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:53:50.935317 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:53:50.935325 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:53:50.935332 | orchestrator | 2026-02-15 02:53:50.935340 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-15 02:53:50.935357 | orchestrator | Sunday 15 February 2026 02:53:17 +0000 (0:00:01.108) 0:00:05.380 ******* 2026-02-15 02:53:50.935365 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:53:50.935373 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:53:50.935380 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:53:50.935388 | orchestrator | 2026-02-15 02:53:50.935396 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-02-15 
02:53:50.935404 | orchestrator | Sunday 15 February 2026 02:53:18 +0000 (0:00:00.520) 0:00:05.901 ******* 2026-02-15 02:53:50.935412 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:53:50.935417 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:53:50.935422 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:53:50.935427 | orchestrator | 2026-02-15 02:53:50.935432 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-02-15 02:53:50.935472 | orchestrator | Sunday 15 February 2026 02:53:19 +0000 (0:00:01.056) 0:00:06.957 ******* 2026-02-15 02:53:50.935482 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:53:50.935490 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:53:50.935498 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:53:50.935506 | orchestrator | 2026-02-15 02:53:50.935515 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-02-15 02:53:50.935521 | orchestrator | Sunday 15 February 2026 02:53:34 +0000 (0:00:15.325) 0:00:22.283 ******* 2026-02-15 02:53:50.935526 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:53:50.935530 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:53:50.935535 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:53:50.935540 | orchestrator | 2026-02-15 02:53:50.935548 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-02-15 02:53:50.935575 | orchestrator | Sunday 15 February 2026 02:53:34 +0000 (0:00:00.116) 0:00:22.399 ******* 2026-02-15 02:53:50.935584 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:53:50.935592 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:53:50.935600 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:53:50.935609 | orchestrator | 2026-02-15 02:53:50.935614 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-02-15 
02:53:50.935620 | orchestrator | Sunday 15 February 2026 02:53:41 +0000 (0:00:07.341) 0:00:29.741 ******* 2026-02-15 02:53:50.935628 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:53:50.935636 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:53:50.935644 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:53:50.935651 | orchestrator | 2026-02-15 02:53:50.935659 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-02-15 02:53:50.935667 | orchestrator | Sunday 15 February 2026 02:53:42 +0000 (0:00:00.446) 0:00:30.188 ******* 2026-02-15 02:53:50.935675 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-02-15 02:53:50.935683 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-02-15 02:53:50.935691 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-02-15 02:53:50.935699 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-02-15 02:53:50.935710 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-02-15 02:53:50.935718 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-02-15 02:53:50.935726 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-02-15 02:53:50.935734 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-02-15 02:53:50.935741 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-02-15 02:53:50.935749 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-02-15 02:53:50.935757 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-02-15 02:53:50.935765 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-02-15 02:53:50.935770 | orchestrator | 2026-02-15 02:53:50.935774 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of 
package cache] ***** 2026-02-15 02:53:50.935789 | orchestrator | Sunday 15 February 2026 02:53:45 +0000 (0:00:03.544) 0:00:33.733 ******* 2026-02-15 02:53:50.935797 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:53:50.935805 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:53:50.935812 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:53:50.935819 | orchestrator | 2026-02-15 02:53:50.935827 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-15 02:53:50.935835 | orchestrator | 2026-02-15 02:53:50.935842 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-15 02:53:50.935849 | orchestrator | Sunday 15 February 2026 02:53:47 +0000 (0:00:01.422) 0:00:35.155 ******* 2026-02-15 02:53:50.935856 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:53:50.935864 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:53:50.935871 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:53:50.935879 | orchestrator | ok: [testbed-manager] 2026-02-15 02:53:50.935886 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:53:50.935894 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:53:50.935901 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:53:50.935909 | orchestrator | 2026-02-15 02:53:50.935916 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 02:53:50.935925 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 02:53:50.935934 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 02:53:50.935943 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 02:53:50.935952 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 02:53:50.935957 | orchestrator | testbed-node-3 : 
ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 02:53:50.935962 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 02:53:50.935967 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 02:53:50.935972 | orchestrator | 2026-02-15 02:53:50.935980 | orchestrator | 2026-02-15 02:53:50.935987 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 02:53:50.935995 | orchestrator | Sunday 15 February 2026 02:53:50 +0000 (0:00:03.624) 0:00:38.780 ******* 2026-02-15 02:53:50.936002 | orchestrator | =============================================================================== 2026-02-15 02:53:50.936010 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.33s 2026-02-15 02:53:50.936018 | orchestrator | Install required packages (Debian) -------------------------------------- 7.34s 2026-02-15 02:53:50.936025 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.62s 2026-02-15 02:53:50.936032 | orchestrator | Copy fact files --------------------------------------------------------- 3.54s 2026-02-15 02:53:50.936040 | orchestrator | Create custom facts directory ------------------------------------------- 1.50s 2026-02-15 02:53:50.936047 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.42s 2026-02-15 02:53:50.936060 | orchestrator | Copy fact file ---------------------------------------------------------- 1.28s 2026-02-15 02:53:51.213442 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.11s 2026-02-15 02:53:51.213555 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.06s 2026-02-15 02:53:51.213570 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.52s 2026-02-15 02:53:51.213611 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.48s 2026-02-15 02:53:51.213622 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s 2026-02-15 02:53:51.213633 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.24s 2026-02-15 02:53:51.213644 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.23s 2026-02-15 02:53:51.213655 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.17s 2026-02-15 02:53:51.213667 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.15s 2026-02-15 02:53:51.213678 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s 2026-02-15 02:53:51.213705 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s 2026-02-15 02:53:51.584904 | orchestrator | + osism apply bootstrap 2026-02-15 02:54:03.850776 | orchestrator | 2026-02-15 02:54:03 | INFO  | Task a9db6445-368a-477f-9bff-6b415d0e5974 (bootstrap) was prepared for execution. 2026-02-15 02:54:03.850875 | orchestrator | 2026-02-15 02:54:03 | INFO  | It takes a moment until task a9db6445-368a-477f-9bff-6b415d0e5974 (bootstrap) has been started and output is visible here. 
2026-02-15 02:54:21.105820 | orchestrator | 2026-02-15 02:54:21.105922 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-02-15 02:54:21.105934 | orchestrator | 2026-02-15 02:54:21.105943 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-02-15 02:54:21.105952 | orchestrator | Sunday 15 February 2026 02:54:08 +0000 (0:00:00.157) 0:00:00.157 ******* 2026-02-15 02:54:21.105961 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:21.105969 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:21.105978 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:21.105986 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:21.105993 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:21.106001 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:21.106009 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:21.106060 | orchestrator | 2026-02-15 02:54:21.106071 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-15 02:54:21.106079 | orchestrator | 2026-02-15 02:54:21.106087 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-15 02:54:21.106095 | orchestrator | Sunday 15 February 2026 02:54:08 +0000 (0:00:00.287) 0:00:00.445 ******* 2026-02-15 02:54:21.106103 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:21.106111 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:21.106118 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:21.106126 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:21.106134 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:21.106142 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:21.106150 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:21.106157 | orchestrator | 2026-02-15 02:54:21.106165 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 
2026-02-15 02:54:21.106173 | orchestrator | 2026-02-15 02:54:21.106181 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-15 02:54:21.106189 | orchestrator | Sunday 15 February 2026 02:54:12 +0000 (0:00:03.490) 0:00:03.936 ******* 2026-02-15 02:54:21.106198 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-02-15 02:54:21.106207 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-02-15 02:54:21.106214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2026-02-15 02:54:21.106277 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-02-15 02:54:21.106287 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2026-02-15 02:54:21.106295 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 02:54:21.106303 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-02-15 02:54:21.106311 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2026-02-15 02:54:21.106319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 02:54:21.106346 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-02-15 02:54:21.106355 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-15 02:54:21.106363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-15 02:54:21.106371 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-02-15 02:54:21.106379 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 02:54:21.106388 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-15 02:54:21.106397 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-15 02:54:21.106407 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-02-15 02:54:21.106416 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-1)  2026-02-15 02:54:21.106425 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2026-02-15 02:54:21.106434 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 02:54:21.106443 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-15 02:54:21.106452 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-15 02:54:21.106461 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:54:21.106471 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:54:21.106480 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2026-02-15 02:54:21.106489 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 02:54:21.106498 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-15 02:54:21.106507 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-15 02:54:21.106516 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-15 02:54:21.106525 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-15 02:54:21.106534 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2026-02-15 02:54:21.106543 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-15 02:54:21.106552 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-15 02:54:21.106561 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-15 02:54:21.106570 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2026-02-15 02:54:21.106579 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-15 02:54:21.106588 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-15 02:54:21.106597 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:54:21.106606 | orchestrator | skipping: [testbed-node-4] => 
(item=testbed-node-2)  2026-02-15 02:54:21.106616 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-15 02:54:21.106626 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:54:21.106635 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-15 02:54:21.106644 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-15 02:54:21.106655 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-15 02:54:21.106669 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-15 02:54:21.106689 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-15 02:54:21.106725 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-15 02:54:21.106739 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-15 02:54:21.106753 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:54:21.106765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-15 02:54:21.106779 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:54:21.106792 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-15 02:54:21.106805 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-15 02:54:21.106820 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-15 02:54:21.106846 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-15 02:54:21.106879 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:54:21.106893 | orchestrator | 2026-02-15 02:54:21.106906 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2026-02-15 02:54:21.106919 | orchestrator | 2026-02-15 02:54:21.106932 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-02-15 02:54:21.106945 | orchestrator | Sunday 15 February 2026 02:54:13 +0000 
(0:00:00.685) 0:00:04.621 ******* 2026-02-15 02:54:21.106958 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:21.106966 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:21.106974 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:21.106982 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:21.106990 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:21.106997 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:21.107005 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:21.107013 | orchestrator | 2026-02-15 02:54:21.107021 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-02-15 02:54:21.107029 | orchestrator | Sunday 15 February 2026 02:54:14 +0000 (0:00:01.324) 0:00:05.946 ******* 2026-02-15 02:54:21.107036 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:21.107044 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:21.107052 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:21.107059 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:21.107067 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:21.107074 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:21.107082 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:21.107090 | orchestrator | 2026-02-15 02:54:21.107098 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-02-15 02:54:21.107106 | orchestrator | Sunday 15 February 2026 02:54:15 +0000 (0:00:01.383) 0:00:07.329 ******* 2026-02-15 02:54:21.107115 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 02:54:21.107124 | orchestrator | 2026-02-15 02:54:21.107132 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-02-15 02:54:21.107140 | orchestrator 
| Sunday 15 February 2026 02:54:16 +0000 (0:00:00.347) 0:00:07.676 ******* 2026-02-15 02:54:21.107148 | orchestrator | changed: [testbed-manager] 2026-02-15 02:54:21.107156 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:54:21.107163 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:54:21.107171 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:54:21.107179 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:54:21.107186 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:54:21.107194 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:54:21.107201 | orchestrator | 2026-02-15 02:54:21.107209 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-02-15 02:54:21.107217 | orchestrator | Sunday 15 February 2026 02:54:18 +0000 (0:00:02.447) 0:00:10.123 ******* 2026-02-15 02:54:21.107278 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:54:21.107288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 02:54:21.107297 | orchestrator | 2026-02-15 02:54:21.107305 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-02-15 02:54:21.107313 | orchestrator | Sunday 15 February 2026 02:54:18 +0000 (0:00:00.340) 0:00:10.464 ******* 2026-02-15 02:54:21.107321 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:54:21.107328 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:54:21.107336 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:54:21.107344 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:54:21.107351 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:54:21.107359 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:54:21.107374 | orchestrator | 2026-02-15 02:54:21.107382 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2026-02-15 02:54:21.107390 | orchestrator | Sunday 15 February 2026 02:54:19 +0000 (0:00:00.993) 0:00:11.458 ******* 2026-02-15 02:54:21.107397 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:54:21.107405 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:54:21.107413 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:54:21.107420 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:54:21.107428 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:54:21.107436 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:54:21.107443 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:54:21.107451 | orchestrator | 2026-02-15 02:54:21.107459 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-02-15 02:54:21.107466 | orchestrator | Sunday 15 February 2026 02:54:20 +0000 (0:00:00.597) 0:00:12.055 ******* 2026-02-15 02:54:21.107474 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:54:21.107482 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:54:21.107489 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:54:21.107502 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:54:21.107509 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:54:21.107517 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:54:21.107525 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:21.107533 | orchestrator | 2026-02-15 02:54:21.107541 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-02-15 02:54:21.107550 | orchestrator | Sunday 15 February 2026 02:54:20 +0000 (0:00:00.468) 0:00:12.523 ******* 2026-02-15 02:54:21.107558 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:54:21.107566 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:54:21.107582 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:54:33.332615 | orchestrator | skipping: 
[testbed-node-5] 2026-02-15 02:54:33.332758 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:54:33.332784 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:54:33.332803 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:54:33.332821 | orchestrator | 2026-02-15 02:54:33.332842 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-02-15 02:54:33.332863 | orchestrator | Sunday 15 February 2026 02:54:21 +0000 (0:00:00.267) 0:00:12.791 ******* 2026-02-15 02:54:33.332885 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 02:54:33.332925 | orchestrator | 2026-02-15 02:54:33.332943 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-02-15 02:54:33.332964 | orchestrator | Sunday 15 February 2026 02:54:21 +0000 (0:00:00.392) 0:00:13.183 ******* 2026-02-15 02:54:33.332982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 02:54:33.333001 | orchestrator | 2026-02-15 02:54:33.333020 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-02-15 02:54:33.333038 | orchestrator | Sunday 15 February 2026 02:54:21 +0000 (0:00:00.359) 0:00:13.543 ******* 2026-02-15 02:54:33.333056 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:33.333076 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:33.333095 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:33.333114 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:33.333135 | orchestrator | ok: [testbed-node-4] 2026-02-15 
02:54:33.333154 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:33.333173 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:33.333192 | orchestrator | 2026-02-15 02:54:33.333211 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-02-15 02:54:33.333228 | orchestrator | Sunday 15 February 2026 02:54:23 +0000 (0:00:01.567) 0:00:15.111 ******* 2026-02-15 02:54:33.333311 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:54:33.333334 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:54:33.333353 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:54:33.333373 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:54:33.333392 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:54:33.333410 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:54:33.333428 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:54:33.333447 | orchestrator | 2026-02-15 02:54:33.333467 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-02-15 02:54:33.333487 | orchestrator | Sunday 15 February 2026 02:54:23 +0000 (0:00:00.347) 0:00:15.458 ******* 2026-02-15 02:54:33.333507 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:33.333527 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:33.333547 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:33.333566 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:33.333586 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:33.333604 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:33.333621 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:33.333639 | orchestrator | 2026-02-15 02:54:33.333659 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-02-15 02:54:33.333677 | orchestrator | Sunday 15 February 2026 02:54:24 +0000 (0:00:00.521) 0:00:15.980 ******* 2026-02-15 02:54:33.333695 | orchestrator | skipping: 
[testbed-manager] 2026-02-15 02:54:33.333714 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:54:33.333732 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:54:33.333752 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:54:33.333770 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:54:33.333789 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:54:33.333808 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:54:33.333829 | orchestrator | 2026-02-15 02:54:33.333849 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-02-15 02:54:33.333872 | orchestrator | Sunday 15 February 2026 02:54:24 +0000 (0:00:00.317) 0:00:16.298 ******* 2026-02-15 02:54:33.333892 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:54:33.333911 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:33.333929 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:54:33.333949 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:54:33.333968 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:54:33.333987 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:54:33.334005 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:54:33.334115 | orchestrator | 2026-02-15 02:54:33.334137 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-02-15 02:54:33.334157 | orchestrator | Sunday 15 February 2026 02:54:25 +0000 (0:00:00.537) 0:00:16.835 ******* 2026-02-15 02:54:33.334177 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:33.334195 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:54:33.334215 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:54:33.334233 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:54:33.334251 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:54:33.334302 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:54:33.334321 | orchestrator | changed: 
[testbed-node-2] 2026-02-15 02:54:33.334339 | orchestrator | 2026-02-15 02:54:33.334359 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-02-15 02:54:33.334377 | orchestrator | Sunday 15 February 2026 02:54:26 +0000 (0:00:01.028) 0:00:17.863 ******* 2026-02-15 02:54:33.334393 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:33.334428 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:33.334449 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:33.334467 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:33.334484 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:33.334501 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:33.334519 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:33.334536 | orchestrator | 2026-02-15 02:54:33.334555 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-02-15 02:54:33.334593 | orchestrator | Sunday 15 February 2026 02:54:27 +0000 (0:00:01.031) 0:00:18.895 ******* 2026-02-15 02:54:33.334644 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 02:54:33.334667 | orchestrator | 2026-02-15 02:54:33.334686 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-02-15 02:54:33.334706 | orchestrator | Sunday 15 February 2026 02:54:27 +0000 (0:00:00.373) 0:00:19.268 ******* 2026-02-15 02:54:33.334725 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:54:33.334744 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:54:33.334762 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:54:33.334778 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:54:33.334793 | orchestrator | changed: [testbed-node-3] 2026-02-15 
02:54:33.334811 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:54:33.334826 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:54:33.334842 | orchestrator | 2026-02-15 02:54:33.334857 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-02-15 02:54:33.334874 | orchestrator | Sunday 15 February 2026 02:54:28 +0000 (0:00:01.222) 0:00:20.491 ******* 2026-02-15 02:54:33.334891 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:33.334909 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:33.334925 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:33.334941 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:33.334958 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:33.334974 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:33.334991 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:33.335009 | orchestrator | 2026-02-15 02:54:33.335027 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-02-15 02:54:33.335046 | orchestrator | Sunday 15 February 2026 02:54:29 +0000 (0:00:00.270) 0:00:20.762 ******* 2026-02-15 02:54:33.335063 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:33.335103 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:33.335137 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:33.335156 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:33.335174 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:33.335191 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:33.335209 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:33.335228 | orchestrator | 2026-02-15 02:54:33.335246 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-02-15 02:54:33.335291 | orchestrator | Sunday 15 February 2026 02:54:29 +0000 (0:00:00.269) 0:00:21.032 ******* 2026-02-15 02:54:33.335310 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:33.335328 | 
orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:33.335347 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:33.335366 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:33.335385 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:33.335404 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:33.335423 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:33.335442 | orchestrator | 2026-02-15 02:54:33.335461 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-02-15 02:54:33.335480 | orchestrator | Sunday 15 February 2026 02:54:29 +0000 (0:00:00.240) 0:00:21.272 ******* 2026-02-15 02:54:33.335501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 02:54:33.335521 | orchestrator | 2026-02-15 02:54:33.335541 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-02-15 02:54:33.335560 | orchestrator | Sunday 15 February 2026 02:54:29 +0000 (0:00:00.303) 0:00:21.576 ******* 2026-02-15 02:54:33.335580 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:33.335600 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:33.335637 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:33.335655 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:33.335673 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:33.335692 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:33.335711 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:33.335730 | orchestrator | 2026-02-15 02:54:33.335749 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-02-15 02:54:33.335769 | orchestrator | Sunday 15 February 2026 02:54:30 +0000 (0:00:00.517) 0:00:22.094 ******* 2026-02-15 02:54:33.335788 | orchestrator | 
skipping: [testbed-manager] 2026-02-15 02:54:33.335808 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:54:33.335827 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:54:33.335846 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:54:33.335865 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:54:33.335882 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:54:33.335901 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:54:33.335919 | orchestrator | 2026-02-15 02:54:33.335938 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-02-15 02:54:33.335957 | orchestrator | Sunday 15 February 2026 02:54:30 +0000 (0:00:00.264) 0:00:22.358 ******* 2026-02-15 02:54:33.335976 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:33.335996 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:33.336016 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:33.336034 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:33.336051 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:54:33.336071 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:54:33.336088 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:54:33.336104 | orchestrator | 2026-02-15 02:54:33.336121 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-02-15 02:54:33.336139 | orchestrator | Sunday 15 February 2026 02:54:31 +0000 (0:00:00.987) 0:00:23.345 ******* 2026-02-15 02:54:33.336157 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:54:33.336177 | orchestrator | ok: [testbed-manager] 2026-02-15 02:54:33.336197 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:54:33.336217 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:54:33.336236 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:54:33.336255 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:54:33.336315 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:54:33.336333 | orchestrator | 
2026-02-15 02:54:33.336352 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-02-15 02:54:33.336371 | orchestrator | Sunday 15 February 2026 02:54:32 +0000 (0:00:00.531) 0:00:23.876 *******
2026-02-15 02:54:33.336390 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:54:33.336409 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:54:33.336428 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:54:33.336466 | orchestrator | ok: [testbed-manager]
2026-02-15 02:54:33.336507 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:55:14.547822 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:55:14.547897 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:55:14.547903 | orchestrator |
2026-02-15 02:55:14.547908 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-02-15 02:55:14.547913 | orchestrator | Sunday 15 February 2026 02:54:33 +0000 (0:00:01.024) 0:00:24.901 *******
2026-02-15 02:55:14.547918 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:55:14.547922 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:55:14.547926 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:55:14.547930 | orchestrator | changed: [testbed-manager]
2026-02-15 02:55:14.547935 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:55:14.547938 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:55:14.547942 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:55:14.547946 | orchestrator |
2026-02-15 02:55:14.547950 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] *****
2026-02-15 02:55:14.547954 | orchestrator | Sunday 15 February 2026 02:54:48 +0000 (0:00:15.117) 0:00:40.018 *******
2026-02-15 02:55:14.547958 | orchestrator | ok: [testbed-manager]
2026-02-15 02:55:14.547977 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:55:14.547981 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:55:14.547984 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:55:14.547988 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:55:14.547992 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:55:14.547995 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:55:14.547999 | orchestrator |
2026-02-15 02:55:14.548003 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
2026-02-15 02:55:14.548007 | orchestrator | Sunday 15 February 2026 02:54:48 +0000 (0:00:00.301) 0:00:40.320 *******
2026-02-15 02:55:14.548011 | orchestrator | ok: [testbed-manager]
2026-02-15 02:55:14.548014 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:55:14.548018 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:55:14.548022 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:55:14.548026 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:55:14.548029 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:55:14.548033 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:55:14.548037 | orchestrator |
2026-02-15 02:55:14.548040 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
2026-02-15 02:55:14.548044 | orchestrator | Sunday 15 February 2026 02:54:49 +0000 (0:00:00.274) 0:00:40.594 *******
2026-02-15 02:55:14.548048 | orchestrator | ok: [testbed-manager]
2026-02-15 02:55:14.548052 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:55:14.548055 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:55:14.548059 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:55:14.548062 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:55:14.548066 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:55:14.548070 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:55:14.548074 | orchestrator |
2026-02-15 02:55:14.548078 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
2026-02-15 02:55:14.548081 | orchestrator | Sunday 15 February 2026 02:54:49 +0000 (0:00:00.269) 0:00:40.864 *******
2026-02-15 02:55:14.548087 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 02:55:14.548093 | orchestrator |
2026-02-15 02:55:14.548096 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************
2026-02-15 02:55:14.548100 | orchestrator | Sunday 15 February 2026 02:54:49 +0000 (0:00:00.331) 0:00:41.196 *******
2026-02-15 02:55:14.548104 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:55:14.548108 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:55:14.548111 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:55:14.548115 | orchestrator | ok: [testbed-manager]
2026-02-15 02:55:14.548118 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:55:14.548122 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:55:14.548126 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:55:14.548129 | orchestrator |
2026-02-15 02:55:14.548133 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
2026-02-15 02:55:14.548137 | orchestrator | Sunday 15 February 2026 02:54:51 +0000 (0:00:01.575) 0:00:42.771 *******
2026-02-15 02:55:14.548140 | orchestrator | changed: [testbed-manager]
2026-02-15 02:55:14.548144 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:55:14.548148 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:55:14.548151 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:55:14.548155 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:55:14.548159 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:55:14.548162 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:55:14.548166 | orchestrator |
2026-02-15 02:55:14.548170 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] *************************
2026-02-15 02:55:14.548174 | orchestrator | Sunday 15 February 2026 02:54:52 +0000 (0:00:01.067) 0:00:43.839 *******
2026-02-15 02:55:14.548177 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:55:14.548181 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:55:14.548184 | orchestrator | ok: [testbed-manager]
2026-02-15 02:55:14.548192 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:55:14.548195 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:55:14.548199 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:55:14.548203 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:55:14.548206 | orchestrator |
2026-02-15 02:55:14.548210 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] **************************
2026-02-15 02:55:14.548214 | orchestrator | Sunday 15 February 2026 02:54:53 +0000 (0:00:00.789) 0:00:44.629 *******
2026-02-15 02:55:14.548218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 02:55:14.548223 | orchestrator |
2026-02-15 02:55:14.548236 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
2026-02-15 02:55:14.548241 | orchestrator | Sunday 15 February 2026 02:54:53 +0000 (0:00:00.356) 0:00:44.985 *******
2026-02-15 02:55:14.548245 | orchestrator | changed: [testbed-manager]
2026-02-15 02:55:14.548248 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:55:14.548252 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:55:14.548256 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:55:14.548259 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:55:14.548263 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:55:14.548266 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:55:14.548270 | orchestrator |
2026-02-15 02:55:14.548283 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************
2026-02-15 02:55:14.548287 | orchestrator | Sunday 15 February 2026 02:54:54 +0000 (0:00:01.021) 0:00:46.006 *******
2026-02-15 02:55:14.548291 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:55:14.548294 | orchestrator | skipping: [testbed-node-3]
2026-02-15 02:55:14.548298 | orchestrator | skipping: [testbed-node-4]
2026-02-15 02:55:14.548302 | orchestrator | skipping: [testbed-node-5]
2026-02-15 02:55:14.548305 | orchestrator | skipping: [testbed-node-0]
2026-02-15 02:55:14.548309 | orchestrator | skipping: [testbed-node-1]
2026-02-15 02:55:14.548313 | orchestrator | skipping: [testbed-node-2]
2026-02-15 02:55:14.548316 | orchestrator |
2026-02-15 02:55:14.548320 | orchestrator | TASK [osism.services.rsyslog : Include logrotate tasks] ************************
2026-02-15 02:55:14.548324 | orchestrator | Sunday 15 February 2026 02:54:54 +0000 (0:00:00.272) 0:00:46.279 *******
2026-02-15 02:55:14.548328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 02:55:14.548331 | orchestrator |
2026-02-15 02:55:14.548335 | orchestrator | TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
2026-02-15 02:55:14.548339 | orchestrator | Sunday 15 February 2026 02:54:55 +0000 (0:00:00.362) 0:00:46.641 *******
2026-02-15 02:55:14.548342 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:55:14.548346 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:55:14.548350 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:55:14.548353 | orchestrator | ok: [testbed-manager]
2026-02-15 02:55:14.548357 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:55:14.548361 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:55:14.548364 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:55:14.548368 | orchestrator |
2026-02-15 02:55:14.548371 | orchestrator | TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
2026-02-15 02:55:14.548375 | orchestrator | Sunday 15 February 2026 02:54:56 +0000 (0:00:01.510) 0:00:48.152 *******
2026-02-15 02:55:14.548379 | orchestrator | changed: [testbed-manager]
2026-02-15 02:55:14.548382 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:55:14.548386 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:55:14.548390 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:55:14.548393 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:55:14.548426 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:55:14.548431 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:55:14.548439 | orchestrator |
2026-02-15 02:55:14.548444 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] ****************
2026-02-15 02:55:14.548448 | orchestrator | Sunday 15 February 2026 02:54:57 +0000 (0:00:01.111) 0:00:49.263 *******
2026-02-15 02:55:14.548452 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:55:14.548456 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:55:14.548461 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:55:14.548465 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:55:14.548469 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:55:14.548473 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:55:14.548478 | orchestrator | changed: [testbed-manager]
2026-02-15 02:55:14.548482 | orchestrator |
2026-02-15 02:55:14.548488 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2026-02-15 02:55:14.548494 | orchestrator | Sunday 15 February 2026 02:55:11 +0000 (0:00:13.663) 0:01:02.927 *******
2026-02-15 02:55:14.548501 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:55:14.548506 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:55:14.548512 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:55:14.548518 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:55:14.548524 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:55:14.548530 | orchestrator | ok: [testbed-manager]
2026-02-15 02:55:14.548535 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:55:14.548541 | orchestrator |
2026-02-15 02:55:14.548546 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2026-02-15 02:55:14.548552 | orchestrator | Sunday 15 February 2026 02:55:12 +0000 (0:00:01.365) 0:01:04.293 *******
2026-02-15 02:55:14.548557 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:55:14.548563 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:55:14.548569 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:55:14.548575 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:55:14.548581 | orchestrator | ok: [testbed-manager]
2026-02-15 02:55:14.548587 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:55:14.548593 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:55:14.548599 | orchestrator |
2026-02-15 02:55:14.548606 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2026-02-15 02:55:14.548612 | orchestrator | Sunday 15 February 2026 02:55:13 +0000 (0:00:00.298) 0:01:05.208 *******
2026-02-15 02:55:14.548618 | orchestrator | ok: [testbed-manager]
2026-02-15 02:55:14.548624 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:55:14.548698 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:55:14.548709 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:55:14.548715 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:55:14.548720 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:55:14.548726 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:55:14.548732 | orchestrator |
2026-02-15 02:55:14.548738 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2026-02-15 02:55:14.548744 | orchestrator | Sunday 15 February 2026 02:55:13 +0000 (0:00:00.298) 0:01:05.506 *******
2026-02-15 02:55:14.548750 | orchestrator | ok: [testbed-manager]
2026-02-15 02:55:14.548756 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:55:14.548762 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:55:14.548767 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:55:14.548773 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:55:14.548778 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:55:14.548784 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:55:14.548790 | orchestrator |
2026-02-15 02:55:14.548802 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2026-02-15 02:55:14.548808 | orchestrator | Sunday 15 February 2026 02:55:14 +0000 (0:00:00.282) 0:01:05.788 *******
2026-02-15 02:55:14.548815 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 02:55:14.548823 | orchestrator |
2026-02-15 02:55:14.548837 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2026-02-15 02:57:38.202533 | orchestrator | Sunday 15 February 2026 02:55:14 +0000 (0:00:00.331) 0:01:06.120 *******
2026-02-15 02:57:38.202632 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:57:38.202644 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:57:38.202652 | orchestrator | ok: [testbed-manager]
2026-02-15 02:57:38.202659 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:57:38.202667 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:57:38.202674 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:57:38.202681 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:57:38.202688 | orchestrator |
2026-02-15 02:57:38.202695 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2026-02-15 02:57:38.202702 | orchestrator | Sunday 15 February 2026 02:55:16 +0000 (0:00:01.615) 0:01:07.736 *******
2026-02-15 02:57:38.202709 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:57:38.202717 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:57:38.202724 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:57:38.202732 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:57:38.202738 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:57:38.202744 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:57:38.202750 | orchestrator | changed: [testbed-manager]
2026-02-15 02:57:38.202809 | orchestrator |
2026-02-15 02:57:38.202818 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2026-02-15 02:57:38.202825 | orchestrator | Sunday 15 February 2026 02:55:16 +0000 (0:00:00.652) 0:01:08.389 *******
2026-02-15 02:57:38.202831 | orchestrator | ok: [testbed-manager]
2026-02-15 02:57:38.202837 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:57:38.202844 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:57:38.202851 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:57:38.202922 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:57:38.202928 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:57:38.202934 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:57:38.202941 | orchestrator |
2026-02-15 02:57:38.202949 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2026-02-15 02:57:38.202956 | orchestrator | Sunday 15 February 2026 02:55:17 +0000 (0:00:00.282) 0:01:08.671 *******
2026-02-15 02:57:38.202963 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:57:38.202970 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:57:38.202975 | orchestrator | ok: [testbed-manager]
2026-02-15 02:57:38.202982 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:57:38.202989 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:57:38.202996 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:57:38.203002 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:57:38.203009 | orchestrator |
2026-02-15 02:57:38.203016 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2026-02-15 02:57:38.203024 | orchestrator | Sunday 15 February 2026 02:55:18 +0000 (0:00:01.092) 0:01:09.764 *******
2026-02-15 02:57:38.203031 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:57:38.203037 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:57:38.203045 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:57:38.203052 | orchestrator | changed: [testbed-manager]
2026-02-15 02:57:38.203059 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:57:38.203066 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:57:38.203074 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:57:38.203081 | orchestrator |
2026-02-15 02:57:38.203093 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2026-02-15 02:57:38.203101 | orchestrator | Sunday 15 February 2026 02:55:19 +0000 (0:00:01.601) 0:01:11.365 *******
2026-02-15 02:57:38.203108 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:57:38.203116 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:57:38.203123 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:57:38.203130 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:57:38.203137 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:57:38.203144 | orchestrator | ok: [testbed-manager]
2026-02-15 02:57:38.203152 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:57:38.203158 | orchestrator |
2026-02-15 02:57:38.203165 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2026-02-15 02:57:38.203195 | orchestrator | Sunday 15 February 2026 02:55:22 +0000 (0:00:02.256) 0:01:13.622 *******
2026-02-15 02:57:38.203203 | orchestrator | ok: [testbed-manager]
2026-02-15 02:57:38.203211 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:57:38.203219 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:57:38.203225 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:57:38.203233 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:57:38.203241 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:57:38.203247 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:57:38.203254 | orchestrator |
2026-02-15 02:57:38.203262 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2026-02-15 02:57:38.203269 | orchestrator | Sunday 15 February 2026 02:55:59 +0000 (0:00:37.853) 0:01:51.476 *******
2026-02-15 02:57:38.203276 | orchestrator | changed: [testbed-manager]
2026-02-15 02:57:38.203284 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:57:38.203291 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:57:38.203297 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:57:38.203304 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:57:38.203311 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:57:38.203317 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:57:38.203324 | orchestrator |
2026-02-15 02:57:38.203333 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2026-02-15 02:57:38.203340 | orchestrator | Sunday 15 February 2026 02:57:20 +0000 (0:01:20.707) 0:03:12.184 *******
2026-02-15 02:57:38.203348 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:57:38.203355 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:57:38.203362 | orchestrator | ok: [testbed-manager]
2026-02-15 02:57:38.203368 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:57:38.203375 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:57:38.203383 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:57:38.203390 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:57:38.203397 | orchestrator |
2026-02-15 02:57:38.203405 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2026-02-15 02:57:38.203413 | orchestrator | Sunday 15 February 2026 02:57:22 +0000 (0:00:01.825) 0:03:14.009 *******
2026-02-15 02:57:38.203420 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:57:38.203427 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:57:38.203434 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:57:38.203441 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:57:38.203447 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:57:38.203454 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:57:38.203460 | orchestrator | changed: [testbed-manager]
2026-02-15 02:57:38.203465 | orchestrator |
2026-02-15 02:57:38.203471 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2026-02-15 02:57:38.203477 | orchestrator | Sunday 15 February 2026 02:57:36 +0000 (0:00:14.425) 0:03:28.434 *******
2026-02-15 02:57:38.203517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2026-02-15 02:57:38.203541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2026-02-15 02:57:38.203559 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2026-02-15 02:57:38.203567 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-15 02:57:38.203575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2026-02-15 02:57:38.203582 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2026-02-15 02:57:38.203589 | orchestrator |
2026-02-15 02:57:38.203595 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2026-02-15 02:57:38.203602 | orchestrator | Sunday 15 February 2026 02:57:37 +0000 (0:00:00.457) 0:03:28.892 *******
2026-02-15 02:57:38.203608 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-15 02:57:38.203615 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-15 02:57:38.203621 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:57:38.203628 | orchestrator | skipping: [testbed-node-3]
2026-02-15 02:57:38.203634 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-15 02:57:38.203641 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-15 02:57:38.203646 | orchestrator | skipping: [testbed-node-4]
2026-02-15 02:57:38.203653 | orchestrator | skipping: [testbed-node-5]
2026-02-15 02:57:38.203658 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-15 02:57:38.203665 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-15 02:57:38.203671 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2026-02-15 02:57:38.203677 | orchestrator |
2026-02-15 02:57:38.203684 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2026-02-15 02:57:38.203691 | orchestrator | Sunday 15 February 2026 02:57:38 +0000 (0:00:00.804) 0:03:29.697 *******
2026-02-15 02:57:38.203700 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-15 02:57:38.203709 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-15 02:57:38.203716 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-15 02:57:38.203722 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-15 02:57:38.203728 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-15 02:57:38.203740 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-15 02:57:42.904319 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-15 02:57:42.904435 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-15 02:57:42.904482 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-15 02:57:42.904495 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-15 02:57:42.904506 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-15 02:57:42.904519 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-15 02:57:42.904531 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-15 02:57:42.904544 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-15 02:57:42.904556 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-15 02:57:42.904569 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-15 02:57:42.904581 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-15 02:57:42.904595 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-15 02:57:42.904606 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-15 02:57:42.904618 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-15 02:57:42.904630 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-15 02:57:42.904643 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-15 02:57:42.904655 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-15 02:57:42.904668 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-15 02:57:42.904680 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-15 02:57:42.904693 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:57:42.904705 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-15 02:57:42.904717 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-15 02:57:42.904728 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-15 02:57:42.904740 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-15 02:57:42.904751 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-15 02:57:42.904763 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-15 02:57:42.904774 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-15 02:57:42.904786 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-15 02:57:42.904799 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-15 02:57:42.904810 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-15 02:57:42.904824 | orchestrator | skipping: [testbed-node-3]
2026-02-15 02:57:42.904835 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-15 02:57:42.904848 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-15 02:57:42.904862 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-15 02:57:42.904906 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-15 02:57:42.904929 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-15 02:57:42.904941 | orchestrator | skipping: [testbed-node-4]
2026-02-15 02:57:42.904954 | orchestrator | skipping: [testbed-node-5]
2026-02-15 02:57:42.904983 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-15 02:57:42.904997 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-15 02:57:42.905010 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2026-02-15 02:57:42.905024 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-15 02:57:42.905037 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-15 02:57:42.905070 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2026-02-15 02:57:42.905084 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-15 02:57:42.905098 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-15 02:57:42.905110 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2026-02-15 02:57:42.905123 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-15 02:57:42.905136 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-15 02:57:42.905149 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2026-02-15 02:57:42.905163 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-15 02:57:42.905175 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-15 02:57:42.905188 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2026-02-15 02:57:42.905201 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-15 02:57:42.905214 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-15 02:57:42.905227 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2026-02-15 02:57:42.905239 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-15 02:57:42.905251 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-15 02:57:42.905262 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2026-02-15 02:57:42.905274 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-15 02:57:42.905287 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-15 02:57:42.905299 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2026-02-15 02:57:42.905312 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-15 02:57:42.905324 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-15 02:57:42.905336 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2026-02-15 02:57:42.905350 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-15 02:57:42.905362 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-15 02:57:42.905374 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2026-02-15 02:57:42.905396 | orchestrator |
2026-02-15 02:57:42.905410 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2026-02-15 02:57:42.905422 | orchestrator | Sunday 15 February 2026 02:57:41 +0000 (0:00:03.660) 0:03:33.357 *******
2026-02-15 02:57:42.905434 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-15 02:57:42.905446 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-15 02:57:42.905458 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-15 02:57:42.905470 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-15 02:57:42.905483 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-15 02:57:42.905495 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-15 02:57:42.905507 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2026-02-15 02:57:42.905519 | orchestrator |
2026-02-15 02:57:42.905532 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2026-02-15 02:57:42.905543 | orchestrator | Sunday 15 February 2026 02:57:42 +0000 (0:00:00.611) 0:03:33.969 *******
2026-02-15 02:57:42.905555 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-15 02:57:42.905567 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:57:42.905579 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-15 02:57:42.905591 | orchestrator | skipping: [testbed-node-0]
2026-02-15 02:57:42.905610 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-15 02:57:42.905622 | orchestrator | skipping: [testbed-node-1]
2026-02-15 02:57:42.905635 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-15 02:57:42.905647 | orchestrator | skipping: [testbed-node-2]
2026-02-15 02:57:42.905659 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-15 02:57:42.905671 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-15 02:57:42.905691 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-15 02:57:56.787828 | orchestrator |
2026-02-15 02:57:56.787989 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-02-15 02:57:56.788003 | orchestrator | Sunday 15 February 2026 02:57:42 +0000 (0:00:00.506) 0:03:34.476 *******
2026-02-15 02:57:56.788007 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-02-15
02:57:56.788013 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:57:56.788018 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-15 02:57:56.788022 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:57:56.788026 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-15 02:57:56.788031 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-02-15 02:57:56.788035 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:57:56.788038 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:57:56.788042 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-15 02:57:56.788046 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-15 02:57:56.788050 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-02-15 02:57:56.788054 | orchestrator | 2026-02-15 02:57:56.788057 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2026-02-15 02:57:56.788083 | orchestrator | Sunday 15 February 2026 02:57:43 +0000 (0:00:00.644) 0:03:35.121 ******* 2026-02-15 02:57:56.788090 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-15 02:57:56.788096 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:57:56.788102 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-15 02:57:56.788108 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-15 02:57:56.788114 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:57:56.788120 
| orchestrator | skipping: [testbed-node-1] 2026-02-15 02:57:56.788126 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2026-02-15 02:57:56.788133 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:57:56.788139 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-15 02:57:56.788145 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-15 02:57:56.788151 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2026-02-15 02:57:56.788158 | orchestrator | 2026-02-15 02:57:56.788165 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2026-02-15 02:57:56.788171 | orchestrator | Sunday 15 February 2026 02:57:44 +0000 (0:00:00.628) 0:03:35.749 ******* 2026-02-15 02:57:56.788177 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:57:56.788183 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:57:56.788189 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:57:56.788195 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:57:56.788202 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:57:56.788208 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:57:56.788214 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:57:56.788220 | orchestrator | 2026-02-15 02:57:56.788227 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2026-02-15 02:57:56.788233 | orchestrator | Sunday 15 February 2026 02:57:44 +0000 (0:00:00.370) 0:03:36.119 ******* 2026-02-15 02:57:56.788239 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:57:56.788246 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:57:56.788253 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:57:56.788259 | orchestrator | ok: [testbed-node-1] 
2026-02-15 02:57:56.788265 | orchestrator | ok: [testbed-manager] 2026-02-15 02:57:56.788271 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:57:56.788277 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:57:56.788280 | orchestrator | 2026-02-15 02:57:56.788284 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2026-02-15 02:57:56.788288 | orchestrator | Sunday 15 February 2026 02:57:50 +0000 (0:00:05.904) 0:03:42.024 ******* 2026-02-15 02:57:56.788292 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2026-02-15 02:57:56.788297 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2026-02-15 02:57:56.788300 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:57:56.788304 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:57:56.788308 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2026-02-15 02:57:56.788312 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2026-02-15 02:57:56.788316 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:57:56.788319 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:57:56.788324 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2026-02-15 02:57:56.788328 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2026-02-15 02:57:56.788346 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:57:56.788350 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:57:56.788353 | orchestrator | skipping: [testbed-node-2] => (item=nscd)  2026-02-15 02:57:56.788357 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:57:56.788366 | orchestrator | 2026-02-15 02:57:56.788370 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-02-15 02:57:56.788373 | orchestrator | Sunday 15 February 2026 02:57:50 +0000 (0:00:00.334) 0:03:42.359 ******* 2026-02-15 02:57:56.788377 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-02-15 02:57:56.788381 | orchestrator | 
ok: [testbed-manager] => (item=cron) 2026-02-15 02:57:56.788384 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-02-15 02:57:56.788401 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-02-15 02:57:56.788405 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-02-15 02:57:56.788409 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-02-15 02:57:56.788412 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-02-15 02:57:56.788416 | orchestrator | 2026-02-15 02:57:56.788420 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-02-15 02:57:56.788423 | orchestrator | Sunday 15 February 2026 02:57:51 +0000 (0:00:01.145) 0:03:43.504 ******* 2026-02-15 02:57:56.788429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 02:57:56.788435 | orchestrator | 2026-02-15 02:57:56.788439 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-02-15 02:57:56.788442 | orchestrator | Sunday 15 February 2026 02:57:52 +0000 (0:00:00.499) 0:03:44.003 ******* 2026-02-15 02:57:56.788446 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:57:56.788450 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:57:56.788453 | orchestrator | ok: [testbed-manager] 2026-02-15 02:57:56.788457 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:57:56.788461 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:57:56.788464 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:57:56.788468 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:57:56.788472 | orchestrator | 2026-02-15 02:57:56.788476 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-02-15 02:57:56.788479 | orchestrator | Sunday 15 February 2026 02:57:53 +0000 
(0:00:01.275) 0:03:45.279 ******* 2026-02-15 02:57:56.788483 | orchestrator | ok: [testbed-manager] 2026-02-15 02:57:56.788487 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:57:56.788490 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:57:56.788494 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:57:56.788497 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:57:56.788501 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:57:56.788505 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:57:56.788508 | orchestrator | 2026-02-15 02:57:56.788512 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-02-15 02:57:56.788516 | orchestrator | Sunday 15 February 2026 02:57:54 +0000 (0:00:00.671) 0:03:45.951 ******* 2026-02-15 02:57:56.788519 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:57:56.788523 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:57:56.788527 | orchestrator | changed: [testbed-manager] 2026-02-15 02:57:56.788530 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:57:56.788534 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:57:56.788537 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:57:56.788541 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:57:56.788545 | orchestrator | 2026-02-15 02:57:56.788548 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-02-15 02:57:56.788553 | orchestrator | Sunday 15 February 2026 02:57:55 +0000 (0:00:00.680) 0:03:46.632 ******* 2026-02-15 02:57:56.788559 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:57:56.788564 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:57:56.788570 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:57:56.788578 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:57:56.788586 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:57:56.788592 | orchestrator | ok: [testbed-manager] 2026-02-15 02:57:56.788597 | orchestrator | ok: 
[testbed-node-2] 2026-02-15 02:57:56.788603 | orchestrator | 2026-02-15 02:57:56.788608 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-02-15 02:57:56.788619 | orchestrator | Sunday 15 February 2026 02:57:55 +0000 (0:00:00.610) 0:03:47.242 ******* 2026-02-15 02:57:56.788627 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771122643.5457244, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:57:56.788635 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771122641.9902833, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:57:56.788646 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771122631.4773724, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:57:56.788659 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771122626.1461022, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:58:01.906162 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771122645.7640517, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:58:01.906246 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771122643.487497, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:58:01.906254 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1771122651.1152523, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:58:01.906276 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:58:01.906282 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:58:01.906298 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 
1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:58:01.906303 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:58:01.906325 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:58:01.906330 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}) 2026-02-15 02:58:01.906335 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 02:58:01.906345 | orchestrator | 2026-02-15 02:58:01.906351 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-02-15 02:58:01.906357 | orchestrator | Sunday 15 February 2026 02:57:56 +0000 (0:00:01.115) 0:03:48.358 ******* 2026-02-15 02:58:01.906362 | orchestrator | changed: [testbed-manager] 2026-02-15 02:58:01.906368 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:58:01.906373 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:58:01.906378 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:58:01.906383 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:58:01.906388 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:58:01.906393 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:58:01.906398 | orchestrator | 2026-02-15 02:58:01.906403 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2026-02-15 02:58:01.906408 | orchestrator | Sunday 15 February 2026 02:57:57 +0000 (0:00:01.147) 0:03:49.505 ******* 2026-02-15 02:58:01.906412 | orchestrator | changed: [testbed-manager] 2026-02-15 02:58:01.906417 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:58:01.906422 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:58:01.906427 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:58:01.906431 | 
orchestrator | changed: [testbed-node-5] 2026-02-15 02:58:01.906436 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:58:01.906441 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:58:01.906445 | orchestrator | 2026-02-15 02:58:01.906450 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-02-15 02:58:01.906455 | orchestrator | Sunday 15 February 2026 02:57:59 +0000 (0:00:01.189) 0:03:50.695 ******* 2026-02-15 02:58:01.906460 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:58:01.906464 | orchestrator | changed: [testbed-manager] 2026-02-15 02:58:01.906469 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:58:01.906474 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:58:01.906479 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:58:01.906483 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:58:01.906488 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:58:01.906493 | orchestrator | 2026-02-15 02:58:01.906498 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-02-15 02:58:01.906502 | orchestrator | Sunday 15 February 2026 02:58:00 +0000 (0:00:01.255) 0:03:51.951 ******* 2026-02-15 02:58:01.906507 | orchestrator | skipping: [testbed-manager] 2026-02-15 02:58:01.906512 | orchestrator | skipping: [testbed-node-3] 2026-02-15 02:58:01.906520 | orchestrator | skipping: [testbed-node-4] 2026-02-15 02:58:01.906525 | orchestrator | skipping: [testbed-node-5] 2026-02-15 02:58:01.906529 | orchestrator | skipping: [testbed-node-0] 2026-02-15 02:58:01.906534 | orchestrator | skipping: [testbed-node-1] 2026-02-15 02:58:01.906539 | orchestrator | skipping: [testbed-node-2] 2026-02-15 02:58:01.906543 | orchestrator | 2026-02-15 02:58:01.906548 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-02-15 02:58:01.906553 | orchestrator | Sunday 15 February 2026 02:58:00 +0000 
(0:00:00.304) 0:03:52.255 ******* 2026-02-15 02:58:01.906561 | orchestrator | ok: [testbed-manager] 2026-02-15 02:58:01.906570 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:58:01.906578 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:58:01.906584 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:58:01.906591 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:58:01.906598 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:58:01.906605 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:58:01.906613 | orchestrator | 2026-02-15 02:58:01.906621 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-02-15 02:58:01.906630 | orchestrator | Sunday 15 February 2026 02:58:01 +0000 (0:00:00.796) 0:03:53.052 ******* 2026-02-15 02:58:01.906641 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 02:58:01.906657 | orchestrator | 2026-02-15 02:58:01.906666 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-02-15 02:58:01.906675 | orchestrator | Sunday 15 February 2026 02:58:01 +0000 (0:00:00.427) 0:03:53.480 ******* 2026-02-15 02:59:21.007141 | orchestrator | ok: [testbed-manager] 2026-02-15 02:59:21.007317 | orchestrator | changed: [testbed-node-0] 2026-02-15 02:59:21.007336 | orchestrator | changed: [testbed-node-3] 2026-02-15 02:59:21.007349 | orchestrator | changed: [testbed-node-4] 2026-02-15 02:59:21.007377 | orchestrator | changed: [testbed-node-1] 2026-02-15 02:59:21.007399 | orchestrator | changed: [testbed-node-2] 2026-02-15 02:59:21.007411 | orchestrator | changed: [testbed-node-5] 2026-02-15 02:59:21.007422 | orchestrator | 2026-02-15 02:59:21.007435 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 
2026-02-15 02:59:21.007446 | orchestrator | Sunday 15 February 2026 02:58:10 +0000 (0:00:08.409) 0:04:01.889 ******* 2026-02-15 02:59:21.007458 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:59:21.007469 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:59:21.007480 | orchestrator | ok: [testbed-manager] 2026-02-15 02:59:21.007491 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:59:21.007502 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:59:21.007513 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:59:21.007524 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:59:21.007534 | orchestrator | 2026-02-15 02:59:21.007546 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-02-15 02:59:21.007557 | orchestrator | Sunday 15 February 2026 02:58:11 +0000 (0:00:01.128) 0:04:03.018 ******* 2026-02-15 02:59:21.007568 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:59:21.007579 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:59:21.007590 | orchestrator | ok: [testbed-manager] 2026-02-15 02:59:21.007601 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:59:21.007612 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:59:21.007622 | orchestrator | ok: [testbed-node-1] 2026-02-15 02:59:21.007633 | orchestrator | ok: [testbed-node-2] 2026-02-15 02:59:21.007644 | orchestrator | 2026-02-15 02:59:21.007655 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-02-15 02:59:21.007666 | orchestrator | Sunday 15 February 2026 02:58:12 +0000 (0:00:01.140) 0:04:04.158 ******* 2026-02-15 02:59:21.007679 | orchestrator | ok: [testbed-manager] 2026-02-15 02:59:21.007691 | orchestrator | ok: [testbed-node-3] 2026-02-15 02:59:21.007703 | orchestrator | ok: [testbed-node-4] 2026-02-15 02:59:21.007716 | orchestrator | ok: [testbed-node-5] 2026-02-15 02:59:21.007729 | orchestrator | ok: [testbed-node-0] 2026-02-15 02:59:21.007742 | orchestrator | ok: [testbed-node-1] 
2026-02-15 02:59:21.007755 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:59:21.007768 | orchestrator |
2026-02-15 02:59:21.007781 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2026-02-15 02:59:21.007795 | orchestrator | Sunday 15 February 2026 02:58:12 +0000 (0:00:00.330) 0:04:04.488 *******
2026-02-15 02:59:21.007808 | orchestrator | ok: [testbed-manager]
2026-02-15 02:59:21.007820 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:59:21.007832 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:59:21.007844 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:59:21.007856 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:59:21.007868 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:59:21.007881 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:59:21.007893 | orchestrator |
2026-02-15 02:59:21.007905 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2026-02-15 02:59:21.007918 | orchestrator | Sunday 15 February 2026 02:58:13 +0000 (0:00:00.454) 0:04:04.943 *******
2026-02-15 02:59:21.007930 | orchestrator | ok: [testbed-manager]
2026-02-15 02:59:21.007941 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:59:21.007952 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:59:21.007986 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:59:21.007997 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:59:21.008008 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:59:21.008019 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:59:21.008030 | orchestrator |
2026-02-15 02:59:21.008041 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2026-02-15 02:59:21.008052 | orchestrator | Sunday 15 February 2026 02:58:13 +0000 (0:00:00.401) 0:04:05.344 *******
2026-02-15 02:59:21.008063 | orchestrator | ok: [testbed-manager]
2026-02-15 02:59:21.008074 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:59:21.008084 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:59:21.008095 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:59:21.008106 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:59:21.008116 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:59:21.008127 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:59:21.008138 | orchestrator |
2026-02-15 02:59:21.008149 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2026-02-15 02:59:21.008189 | orchestrator | Sunday 15 February 2026 02:58:19 +0000 (0:00:05.646) 0:04:10.991 *******
2026-02-15 02:59:21.008203 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 02:59:21.008216 | orchestrator |
2026-02-15 02:59:21.008227 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2026-02-15 02:59:21.008238 | orchestrator | Sunday 15 February 2026 02:58:19 +0000 (0:00:00.505) 0:04:11.497 *******
2026-02-15 02:59:21.008249 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2026-02-15 02:59:21.008260 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2026-02-15 02:59:21.008271 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2026-02-15 02:59:21.008281 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2026-02-15 02:59:21.008292 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:59:21.008321 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2026-02-15 02:59:21.008332 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2026-02-15 02:59:21.008343 | orchestrator | skipping: [testbed-node-3]
2026-02-15 02:59:21.008354 | orchestrator | skipping: [testbed-node-4]
2026-02-15 02:59:21.008365 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2026-02-15 02:59:21.008375 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2026-02-15 02:59:21.008387 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2026-02-15 02:59:21.008397 | orchestrator | skipping: [testbed-node-5]
2026-02-15 02:59:21.008408 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2026-02-15 02:59:21.008419 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2026-02-15 02:59:21.008430 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2026-02-15 02:59:21.008458 | orchestrator | skipping: [testbed-node-0]
2026-02-15 02:59:21.008470 | orchestrator | skipping: [testbed-node-1]
2026-02-15 02:59:21.008481 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2026-02-15 02:59:21.008492 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2026-02-15 02:59:21.008502 | orchestrator | skipping: [testbed-node-2]
2026-02-15 02:59:21.008513 | orchestrator |
2026-02-15 02:59:21.008524 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2026-02-15 02:59:21.008535 | orchestrator | Sunday 15 February 2026 02:58:20 +0000 (0:00:00.391) 0:04:11.888 *******
2026-02-15 02:59:21.008546 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 02:59:21.008557 | orchestrator |
2026-02-15 02:59:21.008568 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2026-02-15 02:59:21.008588 | orchestrator | Sunday 15 February 2026 02:58:20 +0000 (0:00:00.426) 0:04:12.315 *******
2026-02-15 02:59:21.008599 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2026-02-15 02:59:21.008610 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2026-02-15 02:59:21.008621 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:59:21.008632 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2026-02-15 02:59:21.008642 | orchestrator | skipping: [testbed-node-3]
2026-02-15 02:59:21.008653 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2026-02-15 02:59:21.008664 | orchestrator | skipping: [testbed-node-4]
2026-02-15 02:59:21.008675 | orchestrator | skipping: [testbed-node-5]
2026-02-15 02:59:21.008685 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2026-02-15 02:59:21.008696 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2026-02-15 02:59:21.008707 | orchestrator | skipping: [testbed-node-0]
2026-02-15 02:59:21.008717 | orchestrator | skipping: [testbed-node-1]
2026-02-15 02:59:21.008728 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2026-02-15 02:59:21.008739 | orchestrator | skipping: [testbed-node-2]
2026-02-15 02:59:21.008750 | orchestrator |
2026-02-15 02:59:21.008760 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2026-02-15 02:59:21.008771 | orchestrator | Sunday 15 February 2026 02:58:21 +0000 (0:00:00.380) 0:04:12.695 *******
2026-02-15 02:59:21.008783 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 02:59:21.008794 | orchestrator |
2026-02-15 02:59:21.008804 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-02-15 02:59:21.008815 | orchestrator | Sunday 15 February 2026 02:58:21 +0000 (0:00:00.446) 0:04:13.142 *******
2026-02-15 02:59:21.008826 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:59:21.008837 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:59:21.008848 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:59:21.008858 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:59:21.008869 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:59:21.008880 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:59:21.008891 | orchestrator | changed: [testbed-manager]
2026-02-15 02:59:21.008901 | orchestrator |
2026-02-15 02:59:21.008912 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-02-15 02:59:21.008923 | orchestrator | Sunday 15 February 2026 02:58:55 +0000 (0:00:34.083) 0:04:47.226 *******
2026-02-15 02:59:21.008934 | orchestrator | changed: [testbed-manager]
2026-02-15 02:59:21.008944 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:59:21.008955 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:59:21.008966 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:59:21.008976 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:59:21.008987 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:59:21.008998 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:59:21.009008 | orchestrator |
2026-02-15 02:59:21.009019 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-02-15 02:59:21.009035 | orchestrator | Sunday 15 February 2026 02:59:04 +0000 (0:00:08.441) 0:04:55.667 *******
2026-02-15 02:59:21.009046 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:59:21.009057 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:59:21.009067 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:59:21.009078 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:59:21.009088 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:59:21.009099 | orchestrator | changed: [testbed-manager]
2026-02-15 02:59:21.009110 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:59:21.009120 | orchestrator |
2026-02-15 02:59:21.009131 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-02-15 02:59:21.009149 | orchestrator | Sunday 15 February 2026 02:59:12 +0000 (0:00:08.332) 0:05:04.000 *******
2026-02-15 02:59:21.009180 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:59:21.009192 | orchestrator | ok: [testbed-manager]
2026-02-15 02:59:21.009202 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:59:21.009213 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:59:21.009224 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:59:21.009235 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:59:21.009245 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:59:21.009256 | orchestrator |
2026-02-15 02:59:21.009267 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-02-15 02:59:21.009278 | orchestrator | Sunday 15 February 2026 02:59:14 +0000 (0:00:01.866) 0:05:05.866 *******
2026-02-15 02:59:21.009289 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:59:21.009300 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:59:21.009310 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:59:21.009321 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:59:21.009332 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:59:21.009343 | orchestrator | changed: [testbed-manager]
2026-02-15 02:59:21.009353 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:59:21.009364 | orchestrator |
2026-02-15 02:59:21.009382 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-02-15 02:59:33.004765 | orchestrator | Sunday 15 February 2026 02:59:20 +0000 (0:00:06.704) 0:05:12.571 *******
2026-02-15 02:59:33.004912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 02:59:33.004943 | orchestrator |
2026-02-15 02:59:33.004962 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-02-15 02:59:33.004981 | orchestrator | Sunday 15 February 2026 02:59:21 +0000 (0:00:00.476) 0:05:13.047 *******
2026-02-15 02:59:33.005000 | orchestrator | changed: [testbed-manager]
2026-02-15 02:59:33.005019 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:59:33.005035 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:59:33.005057 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:59:33.005076 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:59:33.005092 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:59:33.005111 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:59:33.005128 | orchestrator |
2026-02-15 02:59:33.005146 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-02-15 02:59:33.005164 | orchestrator | Sunday 15 February 2026 02:59:22 +0000 (0:00:00.811) 0:05:13.859 *******
2026-02-15 02:59:33.005182 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:59:33.005264 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:59:33.005286 | orchestrator | ok: [testbed-manager]
2026-02-15 02:59:33.005306 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:59:33.005327 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:59:33.005348 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:59:33.005367 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:59:33.005386 | orchestrator |
2026-02-15 02:59:33.005400 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-02-15 02:59:33.005414 | orchestrator | Sunday 15 February 2026 02:59:24 +0000 (0:00:01.837) 0:05:15.696 *******
2026-02-15 02:59:33.005426 | orchestrator | changed: [testbed-node-0]
2026-02-15 02:59:33.005439 | orchestrator | changed: [testbed-node-4]
2026-02-15 02:59:33.005452 | orchestrator | changed: [testbed-node-3]
2026-02-15 02:59:33.005465 | orchestrator | changed: [testbed-node-1]
2026-02-15 02:59:33.005477 | orchestrator | changed: [testbed-node-5]
2026-02-15 02:59:33.005491 | orchestrator | changed: [testbed-manager]
2026-02-15 02:59:33.005504 | orchestrator | changed: [testbed-node-2]
2026-02-15 02:59:33.005517 | orchestrator |
2026-02-15 02:59:33.005529 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-02-15 02:59:33.005543 | orchestrator | Sunday 15 February 2026 02:59:24 +0000 (0:00:00.807) 0:05:16.503 *******
2026-02-15 02:59:33.005582 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:59:33.005595 | orchestrator | skipping: [testbed-node-3]
2026-02-15 02:59:33.005608 | orchestrator | skipping: [testbed-node-4]
2026-02-15 02:59:33.005619 | orchestrator | skipping: [testbed-node-5]
2026-02-15 02:59:33.005629 | orchestrator | skipping: [testbed-node-0]
2026-02-15 02:59:33.005640 | orchestrator | skipping: [testbed-node-1]
2026-02-15 02:59:33.005651 | orchestrator | skipping: [testbed-node-2]
2026-02-15 02:59:33.005661 | orchestrator |
2026-02-15 02:59:33.005672 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-02-15 02:59:33.005683 | orchestrator | Sunday 15 February 2026 02:59:25 +0000 (0:00:00.344) 0:05:16.847 *******
2026-02-15 02:59:33.005693 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:59:33.005704 | orchestrator | skipping: [testbed-node-3]
2026-02-15 02:59:33.005714 | orchestrator | skipping: [testbed-node-4]
2026-02-15 02:59:33.005725 | orchestrator | skipping: [testbed-node-5]
2026-02-15 02:59:33.005735 | orchestrator | skipping: [testbed-node-0]
2026-02-15 02:59:33.005748 | orchestrator | skipping: [testbed-node-1]
2026-02-15 02:59:33.005767 | orchestrator | skipping: [testbed-node-2]
2026-02-15 02:59:33.005790 | orchestrator |
2026-02-15 02:59:33.005815 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-02-15 02:59:33.005833 | orchestrator | Sunday 15 February 2026 02:59:25 +0000 (0:00:00.440) 0:05:17.288 *******
2026-02-15 02:59:33.005851 | orchestrator | ok: [testbed-manager]
2026-02-15 02:59:33.005869 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:59:33.005885 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:59:33.005901 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:59:33.005917 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:59:33.005934 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:59:33.005953 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:59:33.005972 | orchestrator |
2026-02-15 02:59:33.005990 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-02-15 02:59:33.006111 | orchestrator | Sunday 15 February 2026 02:59:26 +0000 (0:00:00.352) 0:05:17.641 *******
2026-02-15 02:59:33.006129 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:59:33.006140 | orchestrator | skipping: [testbed-node-3]
2026-02-15 02:59:33.006151 | orchestrator | skipping: [testbed-node-4]
2026-02-15 02:59:33.006162 | orchestrator | skipping: [testbed-node-5]
2026-02-15 02:59:33.006172 | orchestrator | skipping: [testbed-node-0]
2026-02-15 02:59:33.006183 | orchestrator | skipping: [testbed-node-1]
2026-02-15 02:59:33.006220 | orchestrator | skipping: [testbed-node-2]
2026-02-15 02:59:33.006231 | orchestrator |
2026-02-15 02:59:33.006242 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-02-15 02:59:33.006254 | orchestrator | Sunday 15 February 2026 02:59:26 +0000 (0:00:00.328) 0:05:17.969 *******
2026-02-15 02:59:33.006265 | orchestrator | ok: [testbed-manager]
2026-02-15 02:59:33.006275 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:59:33.006286 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:59:33.006296 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:59:33.006307 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:59:33.006317 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:59:33.006328 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:59:33.006339 | orchestrator |
2026-02-15 02:59:33.006350 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-02-15 02:59:33.006360 | orchestrator | Sunday 15 February 2026 02:59:26 +0000 (0:00:00.364) 0:05:18.334 *******
2026-02-15 02:59:33.006371 | orchestrator | ok: [testbed-manager] =>
2026-02-15 02:59:33.006382 | orchestrator |  docker_version: 5:27.5.1
2026-02-15 02:59:33.006392 | orchestrator | ok: [testbed-node-3] =>
2026-02-15 02:59:33.006403 | orchestrator |  docker_version: 5:27.5.1
2026-02-15 02:59:33.006413 | orchestrator | ok: [testbed-node-4] =>
2026-02-15 02:59:33.006424 | orchestrator |  docker_version: 5:27.5.1
2026-02-15 02:59:33.006434 | orchestrator | ok: [testbed-node-5] =>
2026-02-15 02:59:33.006444 | orchestrator |  docker_version: 5:27.5.1
2026-02-15 02:59:33.006490 | orchestrator | ok: [testbed-node-0] =>
2026-02-15 02:59:33.006502 | orchestrator |  docker_version: 5:27.5.1
2026-02-15 02:59:33.006513 | orchestrator | ok: [testbed-node-1] =>
2026-02-15 02:59:33.006524 | orchestrator |  docker_version: 5:27.5.1
2026-02-15 02:59:33.006534 | orchestrator | ok: [testbed-node-2] =>
2026-02-15 02:59:33.006544 | orchestrator |  docker_version: 5:27.5.1
2026-02-15 02:59:33.006555 | orchestrator |
2026-02-15 02:59:33.006566 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-02-15 02:59:33.006576 | orchestrator | Sunday 15 February 2026 02:59:27 +0000 (0:00:00.359) 0:05:18.694 *******
2026-02-15 02:59:33.006587 | orchestrator | ok: [testbed-manager] =>
2026-02-15 02:59:33.006597 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-15 02:59:33.006608 | orchestrator | ok: [testbed-node-3] =>
2026-02-15 02:59:33.006619 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-15 02:59:33.006629 | orchestrator | ok: [testbed-node-4] =>
2026-02-15 02:59:33.006640 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-15 02:59:33.006650 | orchestrator | ok: [testbed-node-5] =>
2026-02-15 02:59:33.006660 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-15 02:59:33.006671 | orchestrator | ok: [testbed-node-0] =>
2026-02-15 02:59:33.006681 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-15 02:59:33.006691 | orchestrator | ok: [testbed-node-1] =>
2026-02-15 02:59:33.006702 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-15 02:59:33.006712 | orchestrator | ok: [testbed-node-2] =>
2026-02-15 02:59:33.006723 | orchestrator |  docker_cli_version: 5:27.5.1
2026-02-15 02:59:33.006733 | orchestrator |
2026-02-15 02:59:33.006744 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-02-15 02:59:33.006755 | orchestrator | Sunday 15 February 2026 02:59:27 +0000 (0:00:00.332) 0:05:19.026 *******
2026-02-15 02:59:33.006765 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:59:33.006776 | orchestrator | skipping: [testbed-node-3]
2026-02-15 02:59:33.006786 | orchestrator | skipping: [testbed-node-4]
2026-02-15 02:59:33.006797 | orchestrator | skipping: [testbed-node-5]
2026-02-15 02:59:33.006807 | orchestrator | skipping: [testbed-node-0]
2026-02-15 02:59:33.006817 | orchestrator | skipping: [testbed-node-1]
2026-02-15 02:59:33.006828 | orchestrator | skipping: [testbed-node-2]
2026-02-15 02:59:33.006838 | orchestrator |
2026-02-15 02:59:33.006849 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-02-15 02:59:33.006860 | orchestrator | Sunday 15 February 2026 02:59:27 +0000 (0:00:00.347) 0:05:19.374 *******
2026-02-15 02:59:33.006870 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:59:33.006881 | orchestrator | skipping: [testbed-node-3]
2026-02-15 02:59:33.006891 | orchestrator | skipping: [testbed-node-4]
2026-02-15 02:59:33.006901 | orchestrator | skipping: [testbed-node-5]
2026-02-15 02:59:33.006912 | orchestrator | skipping: [testbed-node-0]
2026-02-15 02:59:33.006926 | orchestrator | skipping: [testbed-node-1]
2026-02-15 02:59:33.006944 | orchestrator | skipping: [testbed-node-2]
2026-02-15 02:59:33.006961 | orchestrator |
2026-02-15 02:59:33.006980 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-02-15 02:59:33.006998 | orchestrator | Sunday 15 February 2026 02:59:28 +0000 (0:00:00.337) 0:05:19.711 *******
2026-02-15 02:59:33.007019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 02:59:33.007042 | orchestrator |
2026-02-15 02:59:33.007061 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-02-15 02:59:33.007079 | orchestrator | Sunday 15 February 2026 02:59:28 +0000 (0:00:00.502) 0:05:20.214 *******
2026-02-15 02:59:33.007097 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:59:33.007116 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:59:33.007136 | orchestrator | ok: [testbed-manager]
2026-02-15 02:59:33.007154 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:59:33.007172 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:59:33.007220 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:59:33.007232 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:59:33.007264 | orchestrator |
2026-02-15 02:59:33.007284 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-02-15 02:59:33.007304 | orchestrator | Sunday 15 February 2026 02:59:29 +0000 (0:00:00.985) 0:05:21.200 *******
2026-02-15 02:59:33.007323 | orchestrator | ok: [testbed-node-4]
2026-02-15 02:59:33.007341 | orchestrator | ok: [testbed-node-3]
2026-02-15 02:59:33.007359 | orchestrator | ok: [testbed-node-1]
2026-02-15 02:59:33.007379 | orchestrator | ok: [testbed-node-0]
2026-02-15 02:59:33.007396 | orchestrator | ok: [testbed-node-2]
2026-02-15 02:59:33.007424 | orchestrator | ok: [testbed-manager]
2026-02-15 02:59:33.007435 | orchestrator | ok: [testbed-node-5]
2026-02-15 02:59:33.007446 | orchestrator |
2026-02-15 02:59:33.007457 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-02-15 02:59:33.007469 | orchestrator | Sunday 15 February 2026 02:59:32 +0000 (0:00:02.964) 0:05:24.164 *******
2026-02-15 02:59:33.007480 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-02-15 02:59:33.007491 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-02-15 02:59:33.007502 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-02-15 02:59:33.007512 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-02-15 02:59:33.007523 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-02-15 02:59:33.007534 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-02-15 02:59:33.007545 | orchestrator | skipping: [testbed-manager]
2026-02-15 02:59:33.007555 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-02-15 02:59:33.007566 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-02-15 02:59:33.007577 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-02-15 02:59:33.007588 | orchestrator | skipping: [testbed-node-3]
2026-02-15 02:59:33.007598 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-02-15 02:59:33.007609 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-02-15 02:59:33.007619 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-02-15 02:59:33.007630 | orchestrator | skipping: [testbed-node-4]
2026-02-15 02:59:33.007641 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-02-15 02:59:33.007666 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-02-15 03:00:34.586167 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-02-15 03:00:34.586285 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:00:34.586298 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-02-15 03:00:34.586306 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-02-15 03:00:34.586315 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-02-15 03:00:34.586322 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:00:34.586330 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:00:34.586338 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-02-15 03:00:34.586346 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-02-15 03:00:34.586400 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-02-15 03:00:34.586408 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:00:34.586416 | orchestrator |
2026-02-15 03:00:34.586425 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-02-15 03:00:34.586435 | orchestrator | Sunday 15 February 2026 02:59:33 +0000 (0:00:00.692) 0:05:24.857 *******
2026-02-15 03:00:34.586443 | orchestrator | ok: [testbed-manager]
2026-02-15 03:00:34.586451 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:00:34.586459 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:00:34.586467 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:00:34.586475 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:00:34.586483 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:00:34.586514 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:00:34.586528 | orchestrator |
2026-02-15 03:00:34.586541 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-02-15 03:00:34.586554 | orchestrator | Sunday 15 February 2026 02:59:40 +0000 (0:00:07.088) 0:05:31.945 *******
2026-02-15 03:00:34.586567 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:00:34.586579 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:00:34.586593 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:00:34.586607 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:00:34.586620 | orchestrator | ok: [testbed-manager]
2026-02-15 03:00:34.586633 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:00:34.586647 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:00:34.586660 | orchestrator |
2026-02-15 03:00:34.586673 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-02-15 03:00:34.586687 | orchestrator | Sunday 15 February 2026 02:59:41 +0000 (0:00:01.075) 0:05:33.021 *******
2026-02-15 03:00:34.586701 | orchestrator | ok: [testbed-manager]
2026-02-15 03:00:34.586716 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:00:34.586726 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:00:34.586736 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:00:34.586745 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:00:34.586754 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:00:34.586763 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:00:34.586772 | orchestrator |
2026-02-15 03:00:34.586782 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-02-15 03:00:34.586792 | orchestrator | Sunday 15 February 2026 02:59:49 +0000 (0:00:08.449) 0:05:41.470 *******
2026-02-15 03:00:34.586801 | orchestrator | changed: [testbed-manager]
2026-02-15 03:00:34.586810 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:00:34.586820 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:00:34.586829 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:00:34.586839 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:00:34.586848 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:00:34.586858 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:00:34.586867 | orchestrator |
2026-02-15 03:00:34.586876 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-02-15 03:00:34.586885 | orchestrator | Sunday 15 February 2026 02:59:53 +0000 (0:00:03.377) 0:05:44.848 *******
2026-02-15 03:00:34.586895 | orchestrator | ok: [testbed-manager]
2026-02-15 03:00:34.586904 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:00:34.586913 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:00:34.586922 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:00:34.586931 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:00:34.586941 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:00:34.586950 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:00:34.586958 | orchestrator |
2026-02-15 03:00:34.586968 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-02-15 03:00:34.586979 | orchestrator | Sunday 15 February 2026 02:59:54 +0000 (0:00:01.363) 0:05:46.212 *******
2026-02-15 03:00:34.586994 | orchestrator | ok: [testbed-manager]
2026-02-15 03:00:34.587008 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:00:34.587020 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:00:34.587033 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:00:34.587046 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:00:34.587061 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:00:34.587075 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:00:34.587087 | orchestrator |
2026-02-15 03:00:34.587101 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-02-15 03:00:34.587110 | orchestrator | Sunday 15 February 2026 02:59:56 +0000 (0:00:00.643) 0:05:47.879 *******
2026-02-15 03:00:34.587118 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:00:34.587126 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:00:34.587134 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:00:34.587141 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:00:34.587158 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:00:34.587166 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:00:34.587174 | orchestrator | changed: [testbed-manager]
2026-02-15 03:00:34.587182 | orchestrator |
2026-02-15 03:00:34.587190 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-02-15 03:00:34.587198 | orchestrator | Sunday 15 February 2026 02:59:56 +0000 (0:00:00.643) 0:05:48.522 *******
2026-02-15 03:00:34.587205 | orchestrator | ok: [testbed-manager]
2026-02-15 03:00:34.587213 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:00:34.587221 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:00:34.587228 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:00:34.587236 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:00:34.587244 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:00:34.587251 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:00:34.587259 | orchestrator |
2026-02-15 03:00:34.587267 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-02-15 03:00:34.587293 | orchestrator | Sunday 15 February 2026 03:00:06 +0000 (0:00:09.354) 0:05:57.876 *******
2026-02-15 03:00:34.587302 | orchestrator | changed: [testbed-manager]
2026-02-15 03:00:34.587309 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:00:34.587317 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:00:34.587325 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:00:34.587333 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:00:34.587340 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:00:34.587401 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:00:34.587411 | orchestrator |
2026-02-15 03:00:34.587420 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-02-15 03:00:34.587428 | orchestrator | Sunday 15 February 2026 03:00:07 +0000 (0:00:00.993) 0:05:58.869 *******
2026-02-15 03:00:34.587436 | orchestrator | ok: [testbed-manager]
2026-02-15 03:00:34.587443 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:00:34.587451 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:00:34.587459 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:00:34.587466 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:00:34.587474 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:00:34.587482 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:00:34.587489 | orchestrator |
2026-02-15 03:00:34.587497 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-02-15 03:00:34.587505 | orchestrator | Sunday 15 February 2026 03:00:16 +0000 (0:00:08.918) 0:06:07.788 *******
2026-02-15 03:00:34.587513 | orchestrator | ok: [testbed-manager]
2026-02-15 03:00:34.587520 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:00:34.587528 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:00:34.587536 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:00:34.587543 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:00:34.587551 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:00:34.587559 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:00:34.587566 | orchestrator |
2026-02-15 03:00:34.587574 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-02-15 03:00:34.587582 | orchestrator | Sunday 15 February 2026 03:00:27 +0000 (0:00:11.547) 0:06:19.335 *******
2026-02-15 03:00:34.587590 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-02-15 03:00:34.587598 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-02-15 03:00:34.587605 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-02-15 03:00:34.587613 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-02-15 03:00:34.587621 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-02-15 03:00:34.587629 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-02-15 03:00:34.587636 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-02-15 03:00:34.587644 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-02-15 03:00:34.587652 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-02-15 03:00:34.587666 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-02-15 03:00:34.587673 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-02-15 03:00:34.587725 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-02-15 03:00:34.587734 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-02-15 03:00:34.587742 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-02-15 03:00:34.587750 | orchestrator |
2026-02-15 03:00:34.587758 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-02-15 03:00:34.587766 | orchestrator | Sunday 15 February 2026 03:00:29 +0000 (0:00:01.263) 0:06:20.599 *******
2026-02-15 03:00:34.587773 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:00:34.587781 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:00:34.587789 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:00:34.587796 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:00:34.587806 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:00:34.587818 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:00:34.587831 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:00:34.587842 | orchestrator |
2026-02-15 03:00:34.587854 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-02-15 03:00:34.587867 | orchestrator | Sunday 15 February 2026 03:00:29 +0000 (0:00:00.591) 0:06:21.191 *******
2026-02-15 03:00:34.587880 | orchestrator | ok: [testbed-manager]
2026-02-15 03:00:34.587893 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:00:34.587906 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:00:34.587919 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:00:34.587931 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:00:34.587944 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:00:34.587963 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:00:34.587976 | orchestrator |
2026-02-15 03:00:34.587990 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-02-15 03:00:34.588005 | orchestrator | Sunday 15 February 2026 03:00:33 +0000 (0:00:03.890) 0:06:25.081 *******
2026-02-15 03:00:34.588020 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:00:34.588034 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:00:34.588047 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:00:34.588059 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:00:34.588072 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:00:34.588086 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:00:34.588098 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:00:34.588111 | orchestrator |
2026-02-15 03:00:34.588124 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-02-15 03:00:34.588138 | orchestrator | Sunday 15 February 2026 03:00:34 +0000 (0:00:00.561) 0:06:25.643 *******
2026-02-15 03:00:34.588151 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-02-15 03:00:34.588164 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-02-15 03:00:34.588177 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:00:34.588191 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-02-15 03:00:34.588204 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-02-15 03:00:34.588218 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:00:34.588232 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-02-15 03:00:34.588245 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-02-15 03:00:34.588258 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:00:34.588285 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-02-15 03:00:55.681098 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-02-15 03:00:55.681191 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:00:55.681202 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-02-15 03:00:55.681211 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-02-15 03:00:55.681218 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:00:55.681243 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-02-15 03:00:55.681251 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-02-15 03:00:55.681258 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:00:55.681265 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-02-15 03:00:55.681273 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-02-15 03:00:55.681280 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:00:55.681287 | orchestrator |
2026-02-15 03:00:55.681296 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install
python bindings from pip)] *** 2026-02-15 03:00:55.681304 | orchestrator | Sunday 15 February 2026 03:00:34 +0000 (0:00:00.809) 0:06:26.452 ******* 2026-02-15 03:00:55.681311 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:00:55.681318 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:00:55.681325 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:00:55.681332 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:00:55.681339 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:00:55.681346 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:00:55.681353 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:00:55.681360 | orchestrator | 2026-02-15 03:00:55.681367 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2026-02-15 03:00:55.681390 | orchestrator | Sunday 15 February 2026 03:00:35 +0000 (0:00:00.565) 0:06:27.017 ******* 2026-02-15 03:00:55.681397 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:00:55.681458 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:00:55.681467 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:00:55.681474 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:00:55.681481 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:00:55.681488 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:00:55.681495 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:00:55.681502 | orchestrator | 2026-02-15 03:00:55.681509 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2026-02-15 03:00:55.681517 | orchestrator | Sunday 15 February 2026 03:00:36 +0000 (0:00:00.605) 0:06:27.622 ******* 2026-02-15 03:00:55.681524 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:00:55.681531 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:00:55.681538 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:00:55.681545 | orchestrator | skipping: 
[testbed-node-5] 2026-02-15 03:00:55.681552 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:00:55.681559 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:00:55.681566 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:00:55.681573 | orchestrator | 2026-02-15 03:00:55.681582 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2026-02-15 03:00:55.681593 | orchestrator | Sunday 15 February 2026 03:00:36 +0000 (0:00:00.576) 0:06:28.199 ******* 2026-02-15 03:00:55.681609 | orchestrator | ok: [testbed-manager] 2026-02-15 03:00:55.681625 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:00:55.681637 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:00:55.681647 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:00:55.681660 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:00:55.681673 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:00:55.681687 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:00:55.681699 | orchestrator | 2026-02-15 03:00:55.681711 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2026-02-15 03:00:55.681721 | orchestrator | Sunday 15 February 2026 03:00:38 +0000 (0:00:01.976) 0:06:30.175 ******* 2026-02-15 03:00:55.681730 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:00:55.681741 | orchestrator | 2026-02-15 03:00:55.681749 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2026-02-15 03:00:55.681758 | orchestrator | Sunday 15 February 2026 03:00:39 +0000 (0:00:00.981) 0:06:31.157 ******* 2026-02-15 03:00:55.681781 | orchestrator | ok: [testbed-manager] 2026-02-15 03:00:55.681790 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:00:55.681798 | orchestrator | changed: 
[testbed-node-4] 2026-02-15 03:00:55.681807 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:00:55.681815 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:00:55.681824 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:00:55.681832 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:00:55.681840 | orchestrator | 2026-02-15 03:00:55.681849 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2026-02-15 03:00:55.681857 | orchestrator | Sunday 15 February 2026 03:00:40 +0000 (0:00:00.862) 0:06:32.019 ******* 2026-02-15 03:00:55.681865 | orchestrator | ok: [testbed-manager] 2026-02-15 03:00:55.681874 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:00:55.681883 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:00:55.681891 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:00:55.681899 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:00:55.681907 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:00:55.681915 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:00:55.681923 | orchestrator | 2026-02-15 03:00:55.681932 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2026-02-15 03:00:55.681940 | orchestrator | Sunday 15 February 2026 03:00:41 +0000 (0:00:00.914) 0:06:32.934 ******* 2026-02-15 03:00:55.681949 | orchestrator | ok: [testbed-manager] 2026-02-15 03:00:55.681957 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:00:55.681965 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:00:55.681973 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:00:55.681981 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:00:55.681990 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:00:55.681998 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:00:55.682006 | orchestrator | 2026-02-15 03:00:55.682070 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay 
file is changed] *** 2026-02-15 03:00:55.682096 | orchestrator | Sunday 15 February 2026 03:00:42 +0000 (0:00:01.649) 0:06:34.583 ******* 2026-02-15 03:00:55.682104 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:00:55.682111 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:00:55.682118 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:00:55.682126 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:00:55.682133 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:00:55.682140 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:00:55.682147 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:00:55.682154 | orchestrator | 2026-02-15 03:00:55.682161 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-02-15 03:00:55.682169 | orchestrator | Sunday 15 February 2026 03:00:44 +0000 (0:00:01.388) 0:06:35.972 ******* 2026-02-15 03:00:55.682176 | orchestrator | ok: [testbed-manager] 2026-02-15 03:00:55.682183 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:00:55.682190 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:00:55.682197 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:00:55.682204 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:00:55.682211 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:00:55.682218 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:00:55.682225 | orchestrator | 2026-02-15 03:00:55.682233 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-02-15 03:00:55.682240 | orchestrator | Sunday 15 February 2026 03:00:45 +0000 (0:00:01.401) 0:06:37.373 ******* 2026-02-15 03:00:55.682247 | orchestrator | changed: [testbed-manager] 2026-02-15 03:00:55.682254 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:00:55.682261 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:00:55.682268 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:00:55.682276 | orchestrator | changed: 
[testbed-node-0] 2026-02-15 03:00:55.682283 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:00:55.682290 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:00:55.682297 | orchestrator | 2026-02-15 03:00:55.682312 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-02-15 03:00:55.682319 | orchestrator | Sunday 15 February 2026 03:00:47 +0000 (0:00:01.449) 0:06:38.823 ******* 2026-02-15 03:00:55.682327 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:00:55.682334 | orchestrator | 2026-02-15 03:00:55.682341 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-02-15 03:00:55.682348 | orchestrator | Sunday 15 February 2026 03:00:48 +0000 (0:00:01.117) 0:06:39.940 ******* 2026-02-15 03:00:55.682355 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:00:55.682363 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:00:55.682370 | orchestrator | ok: [testbed-manager] 2026-02-15 03:00:55.682377 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:00:55.682384 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:00:55.682391 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:00:55.682398 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:00:55.682454 | orchestrator | 2026-02-15 03:00:55.682463 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-02-15 03:00:55.682470 | orchestrator | Sunday 15 February 2026 03:00:49 +0000 (0:00:01.397) 0:06:41.338 ******* 2026-02-15 03:00:55.682478 | orchestrator | ok: [testbed-manager] 2026-02-15 03:00:55.682485 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:00:55.682492 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:00:55.682499 | orchestrator | ok: [testbed-node-5] 
2026-02-15 03:00:55.682506 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:00:55.682513 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:00:55.682520 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:00:55.682527 | orchestrator | 2026-02-15 03:00:55.682534 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-02-15 03:00:55.682541 | orchestrator | Sunday 15 February 2026 03:00:51 +0000 (0:00:01.924) 0:06:43.262 ******* 2026-02-15 03:00:55.682550 | orchestrator | ok: [testbed-manager] 2026-02-15 03:00:55.682562 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:00:55.682579 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:00:55.682592 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:00:55.682603 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:00:55.682614 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:00:55.682625 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:00:55.682637 | orchestrator | 2026-02-15 03:00:55.682647 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-02-15 03:00:55.682660 | orchestrator | Sunday 15 February 2026 03:00:52 +0000 (0:00:01.198) 0:06:44.460 ******* 2026-02-15 03:00:55.682671 | orchestrator | ok: [testbed-manager] 2026-02-15 03:00:55.682698 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:00:55.682709 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:00:55.682720 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:00:55.682732 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:00:55.682743 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:00:55.682756 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:00:55.682766 | orchestrator | 2026-02-15 03:00:55.682776 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-02-15 03:00:55.682788 | orchestrator | Sunday 15 February 2026 03:00:54 +0000 (0:00:01.402) 0:06:45.862 ******* 2026-02-15 03:00:55.682801 | 
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:00:55.682814 | orchestrator | 2026-02-15 03:00:55.682826 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-15 03:00:55.682838 | orchestrator | Sunday 15 February 2026 03:00:55 +0000 (0:00:01.035) 0:06:46.898 ******* 2026-02-15 03:00:55.682850 | orchestrator | 2026-02-15 03:00:55.682863 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-15 03:00:55.682889 | orchestrator | Sunday 15 February 2026 03:00:55 +0000 (0:00:00.070) 0:06:46.969 ******* 2026-02-15 03:00:55.682900 | orchestrator | 2026-02-15 03:00:55.682913 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-15 03:00:55.682921 | orchestrator | Sunday 15 February 2026 03:00:55 +0000 (0:00:00.059) 0:06:47.028 ******* 2026-02-15 03:00:55.682928 | orchestrator | 2026-02-15 03:00:55.682936 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-15 03:00:55.682952 | orchestrator | Sunday 15 February 2026 03:00:55 +0000 (0:00:00.041) 0:06:47.070 ******* 2026-02-15 03:01:22.803421 | orchestrator | 2026-02-15 03:01:22.803598 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-15 03:01:22.803633 | orchestrator | Sunday 15 February 2026 03:00:55 +0000 (0:00:00.040) 0:06:47.110 ******* 2026-02-15 03:01:22.803657 | orchestrator | 2026-02-15 03:01:22.803676 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-15 03:01:22.803695 | orchestrator | Sunday 15 February 2026 03:00:55 +0000 (0:00:00.049) 0:06:47.160 ******* 2026-02-15 03:01:22.803713 | orchestrator | 
2026-02-15 03:01:22.803732 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-02-15 03:01:22.803750 | orchestrator | Sunday 15 February 2026 03:00:55 +0000 (0:00:00.043) 0:06:47.203 ******* 2026-02-15 03:01:22.803770 | orchestrator | 2026-02-15 03:01:22.803788 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-02-15 03:01:22.803806 | orchestrator | Sunday 15 February 2026 03:00:55 +0000 (0:00:00.041) 0:06:47.244 ******* 2026-02-15 03:01:22.803825 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:22.803845 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:22.803865 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:01:22.803885 | orchestrator | 2026-02-15 03:01:22.803903 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-02-15 03:01:22.803923 | orchestrator | Sunday 15 February 2026 03:00:57 +0000 (0:00:01.520) 0:06:48.765 ******* 2026-02-15 03:01:22.803941 | orchestrator | changed: [testbed-manager] 2026-02-15 03:01:22.803961 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:01:22.803980 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:01:22.804000 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:01:22.804019 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:01:22.804038 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:01:22.804057 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:01:22.804075 | orchestrator | 2026-02-15 03:01:22.804094 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-02-15 03:01:22.804113 | orchestrator | Sunday 15 February 2026 03:00:58 +0000 (0:00:01.520) 0:06:50.285 ******* 2026-02-15 03:01:22.804133 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:01:22.804152 | orchestrator | changed: [testbed-manager] 2026-02-15 03:01:22.804171 | orchestrator | changed: [testbed-node-4] 
2026-02-15 03:01:22.804191 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:01:22.804209 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:01:22.804228 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:01:22.804246 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:01:22.804265 | orchestrator | 2026-02-15 03:01:22.804283 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-02-15 03:01:22.804302 | orchestrator | Sunday 15 February 2026 03:00:59 +0000 (0:00:01.200) 0:06:51.486 ******* 2026-02-15 03:01:22.804321 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:01:22.804339 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:01:22.804357 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:01:22.804375 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:01:22.804394 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:01:22.804412 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:01:22.804431 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:01:22.804442 | orchestrator | 2026-02-15 03:01:22.804453 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-02-15 03:01:22.804464 | orchestrator | Sunday 15 February 2026 03:01:02 +0000 (0:00:02.349) 0:06:53.835 ******* 2026-02-15 03:01:22.804542 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:01:22.804555 | orchestrator | 2026-02-15 03:01:22.804567 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-02-15 03:01:22.804579 | orchestrator | Sunday 15 February 2026 03:01:02 +0000 (0:00:00.103) 0:06:53.939 ******* 2026-02-15 03:01:22.804589 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:22.804600 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:01:22.804611 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:01:22.804622 | orchestrator | changed: [testbed-node-5] 2026-02-15 
03:01:22.804633 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:01:22.804644 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:01:22.804655 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:01:22.804665 | orchestrator | 2026-02-15 03:01:22.804676 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-02-15 03:01:22.804688 | orchestrator | Sunday 15 February 2026 03:01:03 +0000 (0:00:01.081) 0:06:55.020 ******* 2026-02-15 03:01:22.804699 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:01:22.804726 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:01:22.804737 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:01:22.804748 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:01:22.804759 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:01:22.804769 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:01:22.804780 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:01:22.804790 | orchestrator | 2026-02-15 03:01:22.804801 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-02-15 03:01:22.804812 | orchestrator | Sunday 15 February 2026 03:01:04 +0000 (0:00:00.601) 0:06:55.622 ******* 2026-02-15 03:01:22.804824 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:01:22.804837 | orchestrator | 2026-02-15 03:01:22.804848 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-02-15 03:01:22.804859 | orchestrator | Sunday 15 February 2026 03:01:05 +0000 (0:00:01.218) 0:06:56.840 ******* 2026-02-15 03:01:22.804870 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:22.804886 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:01:22.804905 | orchestrator 
| ok: [testbed-node-4] 2026-02-15 03:01:22.804922 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:22.804947 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:22.804970 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:22.804989 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:01:22.805008 | orchestrator | 2026-02-15 03:01:22.805026 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-02-15 03:01:22.805046 | orchestrator | Sunday 15 February 2026 03:01:06 +0000 (0:00:00.907) 0:06:57.748 ******* 2026-02-15 03:01:22.805058 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-02-15 03:01:22.805089 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-02-15 03:01:22.805101 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-02-15 03:01:22.805111 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-02-15 03:01:22.805122 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-02-15 03:01:22.805133 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-02-15 03:01:22.805143 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-02-15 03:01:22.805154 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-02-15 03:01:22.805165 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-02-15 03:01:22.805176 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-02-15 03:01:22.805186 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-02-15 03:01:22.805197 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-02-15 03:01:22.805218 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-02-15 03:01:22.805229 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-02-15 03:01:22.805239 | orchestrator | 2026-02-15 03:01:22.805250 | orchestrator | TASK 
[osism.commons.docker_compose : This install type is not supported] ******* 2026-02-15 03:01:22.805261 | orchestrator | Sunday 15 February 2026 03:01:08 +0000 (0:00:02.514) 0:07:00.263 ******* 2026-02-15 03:01:22.805271 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:01:22.805282 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:01:22.805293 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:01:22.805304 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:01:22.805314 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:01:22.805325 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:01:22.805335 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:01:22.805346 | orchestrator | 2026-02-15 03:01:22.805357 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-02-15 03:01:22.805367 | orchestrator | Sunday 15 February 2026 03:01:09 +0000 (0:00:00.796) 0:07:01.059 ******* 2026-02-15 03:01:22.805382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:01:22.805404 | orchestrator | 2026-02-15 03:01:22.805422 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2026-02-15 03:01:22.805439 | orchestrator | Sunday 15 February 2026 03:01:10 +0000 (0:00:00.916) 0:07:01.975 ******* 2026-02-15 03:01:22.805457 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:22.805501 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:01:22.805519 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:01:22.805538 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:22.805552 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:22.805563 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:22.805573 | orchestrator | ok: 
[testbed-node-2] 2026-02-15 03:01:22.805584 | orchestrator | 2026-02-15 03:01:22.805595 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-02-15 03:01:22.805606 | orchestrator | Sunday 15 February 2026 03:01:11 +0000 (0:00:00.907) 0:07:02.883 ******* 2026-02-15 03:01:22.805616 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:22.805627 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:01:22.805638 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:01:22.805648 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:22.805659 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:22.805669 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:22.805680 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:01:22.805691 | orchestrator | 2026-02-15 03:01:22.805701 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-02-15 03:01:22.805712 | orchestrator | Sunday 15 February 2026 03:01:12 +0000 (0:00:01.099) 0:07:03.982 ******* 2026-02-15 03:01:22.805723 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:01:22.805734 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:01:22.805744 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:01:22.805755 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:01:22.805766 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:01:22.805776 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:01:22.805787 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:01:22.805797 | orchestrator | 2026-02-15 03:01:22.805809 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-02-15 03:01:22.805819 | orchestrator | Sunday 15 February 2026 03:01:12 +0000 (0:00:00.547) 0:07:04.530 ******* 2026-02-15 03:01:22.805830 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:22.805841 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:01:22.805851 | 
orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:22.805862 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:22.805872 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:22.805894 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:01:22.805904 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:01:22.805915 | orchestrator | 2026-02-15 03:01:22.805926 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-02-15 03:01:22.805937 | orchestrator | Sunday 15 February 2026 03:01:14 +0000 (0:00:01.478) 0:07:06.009 ******* 2026-02-15 03:01:22.805948 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:01:22.805958 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:01:22.805969 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:01:22.805979 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:01:22.805990 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:01:22.806000 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:01:22.806012 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:01:22.806106 | orchestrator | 2026-02-15 03:01:22.806127 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-02-15 03:01:22.806146 | orchestrator | Sunday 15 February 2026 03:01:14 +0000 (0:00:00.545) 0:07:06.555 ******* 2026-02-15 03:01:22.806210 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:22.806232 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:01:22.806251 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:01:22.806270 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:01:22.806290 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:01:22.806309 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:01:22.806337 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:01:56.672832 | orchestrator | 2026-02-15 03:01:56.672925 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target 
systemd file] *********** 2026-02-15 03:01:56.672934 | orchestrator | Sunday 15 February 2026 03:01:22 +0000 (0:00:07.813) 0:07:14.368 ******* 2026-02-15 03:01:56.672938 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:56.672943 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:01:56.672948 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:01:56.672952 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:01:56.672956 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:01:56.672960 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:01:56.672964 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:01:56.672968 | orchestrator | 2026-02-15 03:01:56.672972 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-02-15 03:01:56.672976 | orchestrator | Sunday 15 February 2026 03:01:24 +0000 (0:00:01.692) 0:07:16.060 ******* 2026-02-15 03:01:56.672980 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:56.672984 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:01:56.672988 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:01:56.672994 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:01:56.673000 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:01:56.673005 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:01:56.673009 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:01:56.673013 | orchestrator | 2026-02-15 03:01:56.673017 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-02-15 03:01:56.673021 | orchestrator | Sunday 15 February 2026 03:01:26 +0000 (0:00:01.888) 0:07:17.948 ******* 2026-02-15 03:01:56.673028 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:56.673034 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:01:56.673039 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:01:56.673043 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:01:56.673046 | 
orchestrator | changed: [testbed-node-5] 2026-02-15 03:01:56.673050 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:01:56.673054 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:01:56.673057 | orchestrator | 2026-02-15 03:01:56.673061 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-15 03:01:56.673065 | orchestrator | Sunday 15 February 2026 03:01:28 +0000 (0:00:01.754) 0:07:19.703 ******* 2026-02-15 03:01:56.673069 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:56.673072 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:01:56.673076 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:01:56.673095 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:56.673099 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:56.673103 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:56.673109 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:01:56.673116 | orchestrator | 2026-02-15 03:01:56.673122 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-15 03:01:56.673128 | orchestrator | Sunday 15 February 2026 03:01:29 +0000 (0:00:00.910) 0:07:20.614 ******* 2026-02-15 03:01:56.673132 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:01:56.673136 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:01:56.673140 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:01:56.673144 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:01:56.673147 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:01:56.673151 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:01:56.673156 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:01:56.673162 | orchestrator | 2026-02-15 03:01:56.673169 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-02-15 03:01:56.673173 | orchestrator | Sunday 15 February 2026 03:01:30 +0000 (0:00:01.127) 0:07:21.741 ******* 
2026-02-15 03:01:56.673176 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:01:56.673180 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:01:56.673184 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:01:56.673187 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:01:56.673191 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:01:56.673195 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:01:56.673198 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:01:56.673202 | orchestrator | 2026-02-15 03:01:56.673206 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-02-15 03:01:56.673209 | orchestrator | Sunday 15 February 2026 03:01:30 +0000 (0:00:00.604) 0:07:22.346 ******* 2026-02-15 03:01:56.673213 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:56.673230 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:01:56.673234 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:01:56.673238 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:56.673241 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:56.673245 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:56.673252 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:01:56.673258 | orchestrator | 2026-02-15 03:01:56.673266 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 2026-02-15 03:01:56.673269 | orchestrator | Sunday 15 February 2026 03:01:31 +0000 (0:00:00.568) 0:07:22.914 ******* 2026-02-15 03:01:56.673275 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:56.673282 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:01:56.673287 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:01:56.673291 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:56.673294 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:56.673298 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:56.673302 | orchestrator | ok: [testbed-node-2] 2026-02-15 
03:01:56.673306 | orchestrator | 2026-02-15 03:01:56.673310 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-02-15 03:01:56.673313 | orchestrator | Sunday 15 February 2026 03:01:32 +0000 (0:00:00.787) 0:07:23.701 ******* 2026-02-15 03:01:56.673317 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:56.673321 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:01:56.673325 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:01:56.673328 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:56.673332 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:56.673336 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:56.673339 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:01:56.673343 | orchestrator | 2026-02-15 03:01:56.673347 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-02-15 03:01:56.673351 | orchestrator | Sunday 15 February 2026 03:01:32 +0000 (0:00:00.603) 0:07:24.305 ******* 2026-02-15 03:01:56.673354 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:56.673358 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:01:56.673366 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:01:56.673370 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:01:56.673374 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:56.673378 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:56.673381 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:56.673385 | orchestrator | 2026-02-15 03:01:56.673399 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2026-02-15 03:01:56.673404 | orchestrator | Sunday 15 February 2026 03:01:38 +0000 (0:00:05.728) 0:07:30.033 ******* 2026-02-15 03:01:56.673408 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:01:56.673413 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:01:56.673418 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:01:56.673422 
| orchestrator | skipping: [testbed-node-5] 2026-02-15 03:01:56.673427 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:01:56.673431 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:01:56.673435 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:01:56.673440 | orchestrator | 2026-02-15 03:01:56.673444 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2026-02-15 03:01:56.673448 | orchestrator | Sunday 15 February 2026 03:01:39 +0000 (0:00:00.635) 0:07:30.668 ******* 2026-02-15 03:01:56.673454 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:01:56.673461 | orchestrator | 2026-02-15 03:01:56.673465 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2026-02-15 03:01:56.673469 | orchestrator | Sunday 15 February 2026 03:01:40 +0000 (0:00:01.176) 0:07:31.845 ******* 2026-02-15 03:01:56.673473 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:01:56.673476 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:56.673480 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:01:56.673484 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:56.673488 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:56.673494 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:56.673500 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:01:56.673506 | orchestrator | 2026-02-15 03:01:56.673510 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2026-02-15 03:01:56.673514 | orchestrator | Sunday 15 February 2026 03:01:42 +0000 (0:00:01.860) 0:07:33.706 ******* 2026-02-15 03:01:56.673518 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:56.673521 | orchestrator | ok: [testbed-node-3] 2026-02-15 
03:01:56.673525 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:01:56.673529 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:56.673532 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:56.673536 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:56.673543 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:01:56.673548 | orchestrator | 2026-02-15 03:01:56.673582 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2026-02-15 03:01:56.673588 | orchestrator | Sunday 15 February 2026 03:01:43 +0000 (0:00:01.147) 0:07:34.853 ******* 2026-02-15 03:01:56.673592 | orchestrator | ok: [testbed-manager] 2026-02-15 03:01:56.673596 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:01:56.673600 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:01:56.673603 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:01:56.673607 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:01:56.673611 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:01:56.673614 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:01:56.673618 | orchestrator | 2026-02-15 03:01:56.673622 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2026-02-15 03:01:56.673625 | orchestrator | Sunday 15 February 2026 03:01:44 +0000 (0:00:00.908) 0:07:35.762 ******* 2026-02-15 03:01:56.673629 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-15 03:01:56.673635 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-15 03:01:56.673642 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-15 03:01:56.673646 | orchestrator | changed: [testbed-node-5] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-15 03:01:56.673653 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-15 03:01:56.673657 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-15 03:01:56.673660 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2026-02-15 03:01:56.673664 | orchestrator | 2026-02-15 03:01:56.673668 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2026-02-15 03:01:56.673674 | orchestrator | Sunday 15 February 2026 03:01:46 +0000 (0:00:02.000) 0:07:37.763 ******* 2026-02-15 03:01:56.673681 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:01:56.673686 | orchestrator | 2026-02-15 03:01:56.673691 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2026-02-15 03:01:56.673694 | orchestrator | Sunday 15 February 2026 03:01:47 +0000 (0:00:00.875) 0:07:38.638 ******* 2026-02-15 03:01:56.673698 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:01:56.673704 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:01:56.673710 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:01:56.673716 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:01:56.673722 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:01:56.673728 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:01:56.673734 | orchestrator | changed: 
[testbed-manager] 2026-02-15 03:01:56.673740 | orchestrator | 2026-02-15 03:01:56.673750 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2026-02-15 03:02:28.315718 | orchestrator | Sunday 15 February 2026 03:01:56 +0000 (0:00:09.597) 0:07:48.236 ******* 2026-02-15 03:02:28.315838 | orchestrator | ok: [testbed-manager] 2026-02-15 03:02:28.315863 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:02:28.315883 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:02:28.315903 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:02:28.315921 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:02:28.315940 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:02:28.315959 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:02:28.315979 | orchestrator | 2026-02-15 03:02:28.315999 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2026-02-15 03:02:28.316018 | orchestrator | Sunday 15 February 2026 03:01:58 +0000 (0:00:02.132) 0:07:50.368 ******* 2026-02-15 03:02:28.316038 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:02:28.316057 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:02:28.316076 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:02:28.316096 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:02:28.316116 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:02:28.316135 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:02:28.316154 | orchestrator | 2026-02-15 03:02:28.316173 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2026-02-15 03:02:28.316193 | orchestrator | Sunday 15 February 2026 03:02:00 +0000 (0:00:01.310) 0:07:51.679 ******* 2026-02-15 03:02:28.316215 | orchestrator | changed: [testbed-manager] 2026-02-15 03:02:28.316236 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:02:28.316256 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:02:28.316275 | orchestrator | changed: 
[testbed-node-5] 2026-02-15 03:02:28.316294 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:02:28.316347 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:02:28.316369 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:02:28.316389 | orchestrator | 2026-02-15 03:02:28.316410 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2026-02-15 03:02:28.316429 | orchestrator | 2026-02-15 03:02:28.316442 | orchestrator | TASK [Include hardening role] ************************************************** 2026-02-15 03:02:28.316455 | orchestrator | Sunday 15 February 2026 03:02:01 +0000 (0:00:01.248) 0:07:52.927 ******* 2026-02-15 03:02:28.316468 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:02:28.316481 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:02:28.316495 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:02:28.316507 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:02:28.316520 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:02:28.316532 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:02:28.316545 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:02:28.316558 | orchestrator | 2026-02-15 03:02:28.316569 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2026-02-15 03:02:28.316580 | orchestrator | 2026-02-15 03:02:28.316591 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2026-02-15 03:02:28.316602 | orchestrator | Sunday 15 February 2026 03:02:02 +0000 (0:00:00.809) 0:07:53.737 ******* 2026-02-15 03:02:28.316613 | orchestrator | changed: [testbed-manager] 2026-02-15 03:02:28.316623 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:02:28.316684 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:02:28.316696 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:02:28.316707 | orchestrator | changed: [testbed-node-5] 2026-02-15 
03:02:28.316718 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:02:28.316728 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:02:28.316739 | orchestrator | 2026-02-15 03:02:28.316749 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2026-02-15 03:02:28.316760 | orchestrator | Sunday 15 February 2026 03:02:03 +0000 (0:00:01.369) 0:07:55.107 ******* 2026-02-15 03:02:28.316771 | orchestrator | ok: [testbed-manager] 2026-02-15 03:02:28.316782 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:02:28.316793 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:02:28.316804 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:02:28.316814 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:02:28.316825 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:02:28.316836 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:02:28.316846 | orchestrator | 2026-02-15 03:02:28.316857 | orchestrator | TASK [Include auditd role] ***************************************************** 2026-02-15 03:02:28.316868 | orchestrator | Sunday 15 February 2026 03:02:04 +0000 (0:00:01.464) 0:07:56.572 ******* 2026-02-15 03:02:28.316879 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:02:28.316890 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:02:28.316900 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:02:28.316911 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:02:28.316922 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:02:28.316948 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:02:28.316960 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:02:28.316971 | orchestrator | 2026-02-15 03:02:28.316981 | orchestrator | TASK [Include smartd role] ***************************************************** 2026-02-15 03:02:28.316992 | orchestrator | Sunday 15 February 2026 03:02:05 +0000 (0:00:00.541) 0:07:57.114 ******* 2026-02-15 03:02:28.317004 | orchestrator | included: 
osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:02:28.317017 | orchestrator | 2026-02-15 03:02:28.317028 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2026-02-15 03:02:28.317039 | orchestrator | Sunday 15 February 2026 03:02:06 +0000 (0:00:01.116) 0:07:58.230 ******* 2026-02-15 03:02:28.317052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:02:28.317075 | orchestrator | 2026-02-15 03:02:28.317086 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2026-02-15 03:02:28.317097 | orchestrator | Sunday 15 February 2026 03:02:07 +0000 (0:00:00.859) 0:07:59.089 ******* 2026-02-15 03:02:28.317107 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:02:28.317118 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:02:28.317129 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:02:28.317139 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:02:28.317150 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:02:28.317161 | orchestrator | changed: [testbed-manager] 2026-02-15 03:02:28.317171 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:02:28.317182 | orchestrator | 2026-02-15 03:02:28.317215 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2026-02-15 03:02:28.317227 | orchestrator | Sunday 15 February 2026 03:02:16 +0000 (0:00:08.762) 0:08:07.852 ******* 2026-02-15 03:02:28.317237 | orchestrator | changed: [testbed-manager] 2026-02-15 03:02:28.317248 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:02:28.317259 | orchestrator | changed: [testbed-node-4] 2026-02-15 
03:02:28.317270 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:02:28.317280 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:02:28.317291 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:02:28.317302 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:02:28.317312 | orchestrator | 2026-02-15 03:02:28.317323 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2026-02-15 03:02:28.317334 | orchestrator | Sunday 15 February 2026 03:02:17 +0000 (0:00:00.897) 0:08:08.749 ******* 2026-02-15 03:02:28.317345 | orchestrator | changed: [testbed-manager] 2026-02-15 03:02:28.317355 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:02:28.317380 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:02:28.317391 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:02:28.317402 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:02:28.317412 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:02:28.317423 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:02:28.317433 | orchestrator | 2026-02-15 03:02:28.317444 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2026-02-15 03:02:28.317455 | orchestrator | Sunday 15 February 2026 03:02:18 +0000 (0:00:01.399) 0:08:10.149 ******* 2026-02-15 03:02:28.317466 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:02:28.317477 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:02:28.317487 | orchestrator | changed: [testbed-manager] 2026-02-15 03:02:28.317498 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:02:28.317508 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:02:28.317519 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:02:28.317530 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:02:28.317540 | orchestrator | 2026-02-15 03:02:28.317551 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 
2026-02-15 03:02:28.317562 | orchestrator | Sunday 15 February 2026 03:02:20 +0000 (0:00:02.219) 0:08:12.369 ******* 2026-02-15 03:02:28.317572 | orchestrator | changed: [testbed-manager] 2026-02-15 03:02:28.317583 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:02:28.317616 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:02:28.317627 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:02:28.317662 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:02:28.317673 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:02:28.317684 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:02:28.317695 | orchestrator | 2026-02-15 03:02:28.317706 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2026-02-15 03:02:28.317717 | orchestrator | Sunday 15 February 2026 03:02:22 +0000 (0:00:01.231) 0:08:13.600 ******* 2026-02-15 03:02:28.317727 | orchestrator | changed: [testbed-manager] 2026-02-15 03:02:28.317738 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:02:28.317756 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:02:28.317767 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:02:28.317778 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:02:28.317789 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:02:28.317799 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:02:28.317810 | orchestrator | 2026-02-15 03:02:28.317821 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2026-02-15 03:02:28.317832 | orchestrator | 2026-02-15 03:02:28.317843 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2026-02-15 03:02:28.317854 | orchestrator | Sunday 15 February 2026 03:02:23 +0000 (0:00:01.190) 0:08:14.790 ******* 2026-02-15 03:02:28.317865 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-15 03:02:28.317876 | orchestrator | 2026-02-15 03:02:28.317886 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-15 03:02:28.317897 | orchestrator | Sunday 15 February 2026 03:02:24 +0000 (0:00:00.842) 0:08:15.633 ******* 2026-02-15 03:02:28.317908 | orchestrator | ok: [testbed-manager] 2026-02-15 03:02:28.317919 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:02:28.317929 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:02:28.317940 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:02:28.317951 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:02:28.317962 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:02:28.317978 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:02:28.317989 | orchestrator | 2026-02-15 03:02:28.318000 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-15 03:02:28.318011 | orchestrator | Sunday 15 February 2026 03:02:25 +0000 (0:00:01.084) 0:08:16.718 ******* 2026-02-15 03:02:28.318092 | orchestrator | changed: [testbed-manager] 2026-02-15 03:02:28.318104 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:02:28.318115 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:02:28.318126 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:02:28.318137 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:02:28.318148 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:02:28.318159 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:02:28.318169 | orchestrator | 2026-02-15 03:02:28.318180 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2026-02-15 03:02:28.318191 | orchestrator | Sunday 15 February 2026 03:02:26 +0000 (0:00:01.236) 0:08:17.954 ******* 2026-02-15 03:02:28.318203 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-15 03:02:28.318214 | orchestrator | 2026-02-15 03:02:28.318224 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2026-02-15 03:02:28.318235 | orchestrator | Sunday 15 February 2026 03:02:27 +0000 (0:00:01.085) 0:08:19.040 ******* 2026-02-15 03:02:28.318246 | orchestrator | ok: [testbed-manager] 2026-02-15 03:02:28.318257 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:02:28.318268 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:02:28.318279 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:02:28.318290 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:02:28.318300 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:02:28.318311 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:02:28.318322 | orchestrator | 2026-02-15 03:02:28.318342 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2026-02-15 03:02:29.989003 | orchestrator | Sunday 15 February 2026 03:02:28 +0000 (0:00:00.841) 0:08:19.882 ******* 2026-02-15 03:02:29.989134 | orchestrator | changed: [testbed-manager] 2026-02-15 03:02:29.989153 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:02:29.989165 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:02:29.989176 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:02:29.989187 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:02:29.989198 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:02:29.989209 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:02:29.989250 | orchestrator | 2026-02-15 03:02:29.989262 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:02:29.989274 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-15 03:02:29.989287 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 
2026-02-15 03:02:29.989298 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-15 03:02:29.989309 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-15 03:02:29.989319 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0 2026-02-15 03:02:29.989330 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-15 03:02:29.989341 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2026-02-15 03:02:29.989351 | orchestrator | 2026-02-15 03:02:29.989362 | orchestrator | 2026-02-15 03:02:29.989373 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:02:29.989384 | orchestrator | Sunday 15 February 2026 03:02:29 +0000 (0:00:01.151) 0:08:21.034 ******* 2026-02-15 03:02:29.989394 | orchestrator | =============================================================================== 2026-02-15 03:02:29.989405 | orchestrator | osism.commons.packages : Install required packages --------------------- 80.71s 2026-02-15 03:02:29.989415 | orchestrator | osism.commons.packages : Download required packages -------------------- 37.85s 2026-02-15 03:02:29.989426 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.08s 2026-02-15 03:02:29.989436 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.12s 2026-02-15 03:02:29.989447 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 14.43s 2026-02-15 03:02:29.989459 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.66s 2026-02-15 03:02:29.989469 | orchestrator | osism.services.docker : Install docker package ------------------------- 
11.55s 2026-02-15 03:02:29.989480 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.60s 2026-02-15 03:02:29.989491 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.35s 2026-02-15 03:02:29.989502 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.92s 2026-02-15 03:02:29.989512 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.76s 2026-02-15 03:02:29.989524 | orchestrator | osism.services.docker : Add repository ---------------------------------- 8.45s 2026-02-15 03:02:29.989538 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.44s 2026-02-15 03:02:29.989566 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.41s 2026-02-15 03:02:29.989579 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.33s 2026-02-15 03:02:29.989591 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.81s 2026-02-15 03:02:29.989603 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 7.09s 2026-02-15 03:02:29.989615 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.70s 2026-02-15 03:02:29.989628 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.90s 2026-02-15 03:02:29.989671 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.73s 2026-02-15 03:02:30.332817 | orchestrator | + osism apply fail2ban 2026-02-15 03:02:43.422718 | orchestrator | 2026-02-15 03:02:43 | INFO  | Task f1b999af-3935-4774-9fd8-e78ca5e86d72 (fail2ban) was prepared for execution. 
2026-02-15 03:02:43.422846 | orchestrator | 2026-02-15 03:02:43 | INFO  | It takes a moment until task f1b999af-3935-4774-9fd8-e78ca5e86d72 (fail2ban) has been started and output is visible here. 2026-02-15 03:03:06.856126 | orchestrator | 2026-02-15 03:03:06.856241 | orchestrator | PLAY [Apply role fail2ban] ***************************************************** 2026-02-15 03:03:06.856257 | orchestrator | 2026-02-15 03:03:06.856269 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] *** 2026-02-15 03:03:06.856281 | orchestrator | Sunday 15 February 2026 03:02:48 +0000 (0:00:00.332) 0:00:00.332 ******* 2026-02-15 03:03:06.856294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:03:06.856308 | orchestrator | 2026-02-15 03:03:06.856319 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] ********************** 2026-02-15 03:03:06.856330 | orchestrator | Sunday 15 February 2026 03:02:49 +0000 (0:00:01.226) 0:00:01.559 ******* 2026-02-15 03:03:06.856341 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:03:06.856383 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:03:06.856395 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:03:06.856406 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:03:06.856417 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:03:06.856427 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:03:06.856438 | orchestrator | changed: [testbed-manager] 2026-02-15 03:03:06.856450 | orchestrator | 2026-02-15 03:03:06.856462 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] ********************** 2026-02-15 03:03:06.856473 | orchestrator | Sunday 15 February 2026 03:03:01 +0000 (0:00:11.895) 0:00:13.454 ******* 
2026-02-15 03:03:06.856484 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:03:06.856494 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:03:06.856505 | orchestrator | changed: [testbed-manager]
2026-02-15 03:03:06.856516 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:03:06.856527 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:03:06.856538 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:03:06.856548 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:03:06.856559 | orchestrator |
2026-02-15 03:03:06.856570 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-02-15 03:03:06.856581 | orchestrator | Sunday 15 February 2026 03:03:03 +0000 (0:00:01.532) 0:00:14.986 *******
2026-02-15 03:03:06.856592 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:03:06.856604 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:03:06.856615 | orchestrator | ok: [testbed-manager]
2026-02-15 03:03:06.856626 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:03:06.856636 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:03:06.856647 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:03:06.856658 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:03:06.856672 | orchestrator |
2026-02-15 03:03:06.856685 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-02-15 03:03:06.856698 | orchestrator | Sunday 15 February 2026 03:03:04 +0000 (0:00:01.519) 0:00:16.506 *******
2026-02-15 03:03:06.856711 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:03:06.856751 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:03:06.856765 | orchestrator | changed: [testbed-manager]
2026-02-15 03:03:06.856778 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:03:06.856790 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:03:06.856802 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:03:06.856813 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:03:06.856824 | orchestrator |
2026-02-15 03:03:06.856834 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:03:06.856846 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:03:06.856885 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:03:06.856897 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:03:06.856908 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:03:06.856919 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:03:06.856930 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:03:06.856941 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:03:06.856951 | orchestrator |
2026-02-15 03:03:06.856962 | orchestrator |
2026-02-15 03:03:06.856973 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:03:06.856984 | orchestrator | Sunday 15 February 2026 03:03:06 +0000 (0:00:01.670) 0:00:18.177 *******
2026-02-15 03:03:06.856995 | orchestrator | ===============================================================================
2026-02-15 03:03:06.857005 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.90s
2026-02-15 03:03:06.857016 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.67s
2026-02-15 03:03:06.857027 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.53s
2026-02-15 03:03:06.857038 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.52s
2026-02-15 03:03:06.857048 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.23s
2026-02-15 03:03:07.210334 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-02-15 03:03:07.210405 | orchestrator | + osism apply network
2026-02-15 03:03:19.391711 | orchestrator | 2026-02-15 03:03:19 | INFO  | Task cf6aa7fa-e3cf-450f-8ff8-683c124f6e79 (network) was prepared for execution.
2026-02-15 03:03:19.391852 | orchestrator | 2026-02-15 03:03:19 | INFO  | It takes a moment until task cf6aa7fa-e3cf-450f-8ff8-683c124f6e79 (network) has been started and output is visible here.
2026-02-15 03:03:51.861757 | orchestrator |
2026-02-15 03:03:51.861890 | orchestrator | PLAY [Apply role network] ******************************************************
2026-02-15 03:03:51.861907 | orchestrator |
2026-02-15 03:03:51.861919 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-02-15 03:03:51.861929 | orchestrator | Sunday 15 February 2026 03:03:24 +0000 (0:00:00.300) 0:00:00.300 *******
2026-02-15 03:03:51.861939 | orchestrator | ok: [testbed-manager]
2026-02-15 03:03:51.861950 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:03:51.861960 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:03:51.861970 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:03:51.861979 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:03:51.861989 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:03:51.861998 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:03:51.862007 | orchestrator |
2026-02-15 03:03:51.862072 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-02-15 03:03:51.862082 | orchestrator | Sunday 15 February 2026 03:03:25 +0000 (0:00:00.817) 0:00:01.118 *******
2026-02-15 03:03:51.862094 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:03:51.862106 | orchestrator |
2026-02-15 03:03:51.862116 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-02-15 03:03:51.862148 | orchestrator | Sunday 15 February 2026 03:03:26 +0000 (0:00:01.384) 0:00:02.502 *******
2026-02-15 03:03:51.862158 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:03:51.862167 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:03:51.862177 | orchestrator | ok: [testbed-manager]
2026-02-15 03:03:51.862186 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:03:51.862195 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:03:51.862204 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:03:51.862213 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:03:51.862223 | orchestrator |
2026-02-15 03:03:51.862232 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-02-15 03:03:51.862242 | orchestrator | Sunday 15 February 2026 03:03:28 +0000 (0:00:02.235) 0:00:04.738 *******
2026-02-15 03:03:51.862251 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:03:51.862261 | orchestrator | ok: [testbed-manager]
2026-02-15 03:03:51.862271 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:03:51.862280 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:03:51.862290 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:03:51.862299 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:03:51.862310 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:03:51.862321 | orchestrator |
2026-02-15 03:03:51.862333 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-02-15 03:03:51.862344 | orchestrator | Sunday 15 February 2026 03:03:30 +0000 (0:00:01.922) 0:00:06.660 *******
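The netplan path of the network role, as logged below, templates a configuration locally, copies it to each node, and later removes the cloud-init-generated `/etc/netplan/50-cloud-init.yaml` while keeping `/etc/netplan/01-osism.yaml`. The deployed file's content does not appear in the log; a sketch of what a minimal `01-osism.yaml` could look like (interface name and address are invented for illustration, only the management-network prefix matches values seen in the log):

```yaml
# /etc/netplan/01-osism.yaml -- illustrative only; the real file is rendered
# by osism.commons.network and is not shown in this log.
network:
  version: 2
  ethernets:
    eth0:                      # hypothetical interface name
      addresses:
        - 192.168.16.5/20      # management address, cf. local_ip values below
```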
2026-02-15 03:03:51.862355 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-02-15 03:03:51.862367 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-02-15 03:03:51.862379 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-02-15 03:03:51.862391 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-02-15 03:03:51.862400 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-02-15 03:03:51.862409 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-02-15 03:03:51.862418 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-02-15 03:03:51.862428 | orchestrator |
2026-02-15 03:03:51.862454 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-02-15 03:03:51.862464 | orchestrator | Sunday 15 February 2026 03:03:31 +0000 (0:00:01.042) 0:00:07.703 *******
2026-02-15 03:03:51.862474 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-15 03:03:51.862484 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-15 03:03:51.862493 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-15 03:03:51.862502 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 03:03:51.862512 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-15 03:03:51.862521 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-15 03:03:51.862530 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-15 03:03:51.862540 | orchestrator |
2026-02-15 03:03:51.862549 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-02-15 03:03:51.862558 | orchestrator | Sunday 15 February 2026 03:03:35 +0000 (0:00:03.717) 0:00:11.421 *******
2026-02-15 03:03:51.862568 | orchestrator | changed: [testbed-manager]
2026-02-15 03:03:51.862577 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:03:51.862587 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:03:51.862596 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:03:51.862610 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:03:51.862619 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:03:51.862629 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:03:51.862638 | orchestrator |
2026-02-15 03:03:51.862647 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-02-15 03:03:51.862657 | orchestrator | Sunday 15 February 2026 03:03:37 +0000 (0:00:01.749) 0:00:13.170 *******
2026-02-15 03:03:51.862666 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-15 03:03:51.862675 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 03:03:51.862685 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-15 03:03:51.862694 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-15 03:03:51.862710 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-15 03:03:51.862720 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-15 03:03:51.862729 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-15 03:03:51.862739 | orchestrator |
2026-02-15 03:03:51.862748 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2026-02-15 03:03:51.862758 | orchestrator | Sunday 15 February 2026 03:03:39 +0000 (0:00:02.029) 0:00:15.200 *******
2026-02-15 03:03:51.862768 | orchestrator | ok: [testbed-manager]
2026-02-15 03:03:51.862777 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:03:51.862787 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:03:51.862796 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:03:51.862806 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:03:51.862815 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:03:51.862825 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:03:51.862834 | orchestrator |
2026-02-15 03:03:51.862898 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2026-02-15 03:03:51.862926 | orchestrator | Sunday 15 February 2026 03:03:40 +0000 (0:00:01.212) 0:00:16.412 *******
2026-02-15 03:03:51.862937 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:03:51.862947 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:03:51.862957 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:03:51.862966 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:03:51.862976 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:03:51.862985 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:03:51.862995 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:03:51.863005 | orchestrator |
2026-02-15 03:03:51.863014 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2026-02-15 03:03:51.863024 | orchestrator | Sunday 15 February 2026 03:03:41 +0000 (0:00:00.718) 0:00:17.131 *******
2026-02-15 03:03:51.863035 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:03:51.863052 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:03:51.863069 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:03:51.863085 | orchestrator | ok: [testbed-manager]
2026-02-15 03:03:51.863101 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:03:51.863118 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:03:51.863134 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:03:51.863151 | orchestrator |
2026-02-15 03:03:51.863168 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2026-02-15 03:03:51.863186 | orchestrator | Sunday 15 February 2026 03:03:43 +0000 (0:00:02.414) 0:00:19.545 *******
2026-02-15 03:03:51.863202 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:03:51.863218 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:03:51.863235 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:03:51.863252 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:03:51.863263 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:03:51.863273 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:03:51.863289 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2026-02-15 03:03:51.863307 | orchestrator |
2026-02-15 03:03:51.863324 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2026-02-15 03:03:51.863335 | orchestrator | Sunday 15 February 2026 03:03:44 +0000 (0:00:01.064) 0:00:20.610 *******
2026-02-15 03:03:51.863345 | orchestrator | ok: [testbed-manager]
2026-02-15 03:03:51.863354 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:03:51.863364 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:03:51.863373 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:03:51.863383 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:03:51.863393 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:03:51.863402 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:03:51.863412 | orchestrator |
2026-02-15 03:03:51.863422 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2026-02-15 03:03:51.863431 | orchestrator | Sunday 15 February 2026 03:03:46 +0000 (0:00:01.838) 0:00:22.448 *******
2026-02-15 03:03:51.863442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:03:51.863462 | orchestrator |
2026-02-15 03:03:51.863472 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2026-02-15 03:03:51.863482 | orchestrator | Sunday 15 February 2026 03:03:47 +0000 (0:00:01.402) 0:00:23.851 *******
2026-02-15 03:03:51.863491 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:03:51.863501 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:03:51.863511 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:03:51.863520 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:03:51.863530 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:03:51.863540 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:03:51.863550 | orchestrator | ok: [testbed-manager]
2026-02-15 03:03:51.863560 | orchestrator |
2026-02-15 03:03:51.863569 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2026-02-15 03:03:51.863580 | orchestrator | Sunday 15 February 2026 03:03:49 +0000 (0:00:01.611) 0:00:25.462 *******
2026-02-15 03:03:51.863591 | orchestrator | ok: [testbed-manager]
2026-02-15 03:03:51.863601 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:03:51.863612 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:03:51.863623 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:03:51.863633 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:03:51.863644 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:03:51.863655 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:03:51.863666 | orchestrator |
2026-02-15 03:03:51.863677 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-15 03:03:51.863688 | orchestrator | Sunday 15 February 2026 03:03:50 +0000 (0:00:01.035) 0:00:26.498 *******
2026-02-15 03:03:51.863706 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2026-02-15 03:03:51.863717 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2026-02-15 03:03:51.863728 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2026-02-15 03:03:51.863739 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-15 03:03:51.863750 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2026-02-15 03:03:51.863761 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-15 03:03:51.863773 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2026-02-15 03:03:51.863783 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-15 03:03:51.863794 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2026-02-15 03:03:51.863805 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-15 03:03:51.863816 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2026-02-15 03:03:51.863827 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-15 03:03:51.863885 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-15 03:03:51.863897 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2026-02-15 03:03:51.863909 | orchestrator |
2026-02-15 03:03:51.863931 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2026-02-15 03:04:10.966415 | orchestrator | Sunday 15 February 2026 03:03:51 +0000 (0:00:01.446) 0:00:27.944 *******
2026-02-15 03:04:10.966527 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:04:10.966543 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:04:10.966555 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:04:10.966567 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:04:10.966578 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:04:10.966589 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:04:10.966599 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:04:10.966610 | orchestrator |
2026-02-15 03:04:10.966645 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2026-02-15 03:04:10.966657 | orchestrator | Sunday 15 February 2026 03:03:52 +0000 (0:00:00.802) 0:00:28.747 *******
2026-02-15 03:04:10.966669 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5
2026-02-15 03:04:10.966683 | orchestrator |
2026-02-15 03:04:10.966694 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2026-02-15 03:04:10.966704 | orchestrator | Sunday 15 February 2026 03:03:57 +0000 (0:00:05.226) 0:00:33.974 *******
2026-02-15 03:04:10.966717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.966729 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.966742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.966753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.966764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.966775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.966786 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.966812 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.966831 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.966849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.966867 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.966948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.966984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.967003 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.967022 | orchestrator |
2026-02-15 03:04:10.967042 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2026-02-15 03:04:10.967062 | orchestrator | Sunday 15 February 2026 03:04:04 +0000 (0:00:06.759) 0:00:40.733 *******
2026-02-15 03:04:10.967082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.967103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.967122 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.967142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.967155 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.967169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.967182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.967195 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.967215 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2026-02-15 03:04:10.967228 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.967241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.967263 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:10.967290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:18.107824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2026-02-15 03:04:18.107984 | orchestrator |
2026-02-15 03:04:18.108000 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ******************
2026-02-15 03:04:18.108009 | orchestrator | Sunday 15 February 2026 03:04:10 +0000 (0:00:06.314) 0:00:47.047 *******
2026-02-15 03:04:18.108018 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:04:18.108025 | orchestrator |
2026-02-15 03:04:18.108032 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
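The two tasks above render one `.netdev`/`.network` pair per VXLAN from the logged item dictionaries (`vni`, `local_ip`, `dests`, `mtu`, `addresses`). The templates themselves are not in the log; a sketch of what a `30-vxlan0` pair for testbed-manager could look like, using only values that appear above (file names match the cleanup task below; section layout and option choices are assumptions about the role):

```ini
; /etc/systemd/network/30-vxlan0.netdev -- illustrative sketch
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5

; /etc/systemd/network/30-vxlan0.network -- illustrative sketch
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20

; one all-zero FDB entry per peer in 'dests' floods BUM traffic to it
[BridgeFDB]
MACAddress=00:00:00:00:00:00
Destination=192.168.16.10
```

In a real deployment the VXLAN device must also be attached to its underlay interface via a `VXLAN=vxlan0` line in that interface's `.network` file, and each of the six `dests` addresses would get its own `[BridgeFDB]` section.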
2026-02-15 03:04:18.108038 | orchestrator | Sunday 15 February 2026 03:04:12 +0000 (0:00:01.428) 0:00:48.475 *******
2026-02-15 03:04:18.108045 | orchestrator | ok: [testbed-manager]
2026-02-15 03:04:18.108052 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:04:18.108058 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:04:18.108064 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:04:18.108070 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:04:18.108076 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:04:18.108082 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:04:18.108088 | orchestrator |
2026-02-15 03:04:18.108094 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2026-02-15 03:04:18.108101 | orchestrator | Sunday 15 February 2026 03:04:13 +0000 (0:00:01.287) 0:00:49.763 *******
2026-02-15 03:04:18.108107 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-15 03:04:18.108114 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-15 03:04:18.108120 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-15 03:04:18.108126 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-15 03:04:18.108132 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-15 03:04:18.108138 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-15 03:04:18.108144 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-15 03:04:18.108150 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:04:18.108158 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-15 03:04:18.108164 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-15 03:04:18.108170 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-15 03:04:18.108176 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-15 03:04:18.108182 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-15 03:04:18.108188 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:04:18.108214 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-15 03:04:18.108220 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-15 03:04:18.108227 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-15 03:04:18.108233 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-15 03:04:18.108238 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:04:18.108258 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-15 03:04:18.108265 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-15 03:04:18.108271 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:04:18.108277 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-15 03:04:18.108283 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-15 03:04:18.108289 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-15 03:04:18.108295 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-15 03:04:18.108301 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-15 03:04:18.108308 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-15 03:04:18.108314 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:04:18.108320 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:04:18.108326 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)
2026-02-15 03:04:18.108332 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)
2026-02-15 03:04:18.108338 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)
2026-02-15 03:04:18.108344 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)
2026-02-15 03:04:18.108350 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:04:18.108356 | orchestrator |
2026-02-15 03:04:18.108362 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] **************
2026-02-15 03:04:18.108382 | orchestrator | Sunday 15 February 2026 03:04:16 +0000 (0:00:02.510) 0:00:52.273 *******
2026-02-15 03:04:18.108389 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:04:18.108395 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:04:18.108401 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:04:18.108407 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:04:18.108413 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:04:18.108419 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:04:18.108425 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:04:18.108431 | orchestrator |
2026-02-15 03:04:18.108437 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2026-02-15 03:04:18.108443 | orchestrator | Sunday 15 February 2026 03:04:16 +0000 (0:00:00.702) 0:00:52.975 *******
2026-02-15 03:04:18.108449 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:04:18.108455 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:04:18.108461 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:04:18.108468 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:04:18.108473 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:04:18.108479 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:04:18.108485 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:04:18.108491 | orchestrator |
2026-02-15 03:04:18.108497 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:04:18.108504 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-15 03:04:18.108512 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-15 03:04:18.108523 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-15 03:04:18.108530 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-15 03:04:18.108536 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-15 03:04:18.108542 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-15 03:04:18.108548 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-02-15 03:04:18.108554 | orchestrator |
2026-02-15 03:04:18.108560 | orchestrator |
2026-02-15 03:04:18.108566 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:04:18.108572 | orchestrator | Sunday 15 February 2026 03:04:17 +0000 (0:00:00.773) 0:00:53.749 *******
2026-02-15 03:04:18.108578 | orchestrator | ===============================================================================
2026-02-15 03:04:18.108584 | orchestrator | osism.commons.network : Create systemd networkd netdev
files ------------ 6.76s 2026-02-15 03:04:18.108590 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.31s 2026-02-15 03:04:18.108596 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 5.23s 2026-02-15 03:04:18.108602 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.72s 2026-02-15 03:04:18.108608 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.51s 2026-02-15 03:04:18.108614 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.41s 2026-02-15 03:04:18.108620 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.24s 2026-02-15 03:04:18.108630 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 2.03s 2026-02-15 03:04:18.108636 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.92s 2026-02-15 03:04:18.108642 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.84s 2026-02-15 03:04:18.108648 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.75s 2026-02-15 03:04:18.108654 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.61s 2026-02-15 03:04:18.108660 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.45s 2026-02-15 03:04:18.108666 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.43s 2026-02-15 03:04:18.108672 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.40s 2026-02-15 03:04:18.108678 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.38s 2026-02-15 03:04:18.108684 | orchestrator | osism.commons.network : List existing configuration files 
--------------- 1.29s 2026-02-15 03:04:18.108690 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.21s 2026-02-15 03:04:18.108696 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.06s 2026-02-15 03:04:18.108703 | orchestrator | osism.commons.network : Create required directories --------------------- 1.04s 2026-02-15 03:04:18.476014 | orchestrator | + osism apply wireguard 2026-02-15 03:04:30.681983 | orchestrator | 2026-02-15 03:04:30 | INFO  | Task 80bdcfe9-8862-4631-9615-268e34f682db (wireguard) was prepared for execution. 2026-02-15 03:04:30.682125 | orchestrator | 2026-02-15 03:04:30 | INFO  | It takes a moment until task 80bdcfe9-8862-4631-9615-268e34f682db (wireguard) has been started and output is visible here. 2026-02-15 03:04:53.651826 | orchestrator | 2026-02-15 03:04:53.651984 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-02-15 03:04:53.652042 | orchestrator | 2026-02-15 03:04:53.652050 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-02-15 03:04:53.652056 | orchestrator | Sunday 15 February 2026 03:04:35 +0000 (0:00:00.282) 0:00:00.282 ******* 2026-02-15 03:04:53.652063 | orchestrator | ok: [testbed-manager] 2026-02-15 03:04:53.652070 | orchestrator | 2026-02-15 03:04:53.652076 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-02-15 03:04:53.652082 | orchestrator | Sunday 15 February 2026 03:04:37 +0000 (0:00:01.680) 0:00:01.963 ******* 2026-02-15 03:04:53.652089 | orchestrator | changed: [testbed-manager] 2026-02-15 03:04:53.652100 | orchestrator | 2026-02-15 03:04:53.652107 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-02-15 03:04:53.652113 | orchestrator | Sunday 15 February 2026 03:04:45 +0000 (0:00:07.938) 0:00:09.902 ******* 2026-02-15 
03:04:53.652119 | orchestrator | changed: [testbed-manager] 2026-02-15 03:04:53.652126 | orchestrator | 2026-02-15 03:04:53.652132 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-02-15 03:04:53.652138 | orchestrator | Sunday 15 February 2026 03:04:45 +0000 (0:00:00.618) 0:00:10.520 ******* 2026-02-15 03:04:53.652144 | orchestrator | changed: [testbed-manager] 2026-02-15 03:04:53.652150 | orchestrator | 2026-02-15 03:04:53.652156 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-02-15 03:04:53.652162 | orchestrator | Sunday 15 February 2026 03:04:46 +0000 (0:00:00.481) 0:00:11.001 ******* 2026-02-15 03:04:53.652172 | orchestrator | ok: [testbed-manager] 2026-02-15 03:04:53.652182 | orchestrator | 2026-02-15 03:04:53.652191 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-02-15 03:04:53.652202 | orchestrator | Sunday 15 February 2026 03:04:46 +0000 (0:00:00.758) 0:00:11.759 ******* 2026-02-15 03:04:53.652212 | orchestrator | ok: [testbed-manager] 2026-02-15 03:04:53.652221 | orchestrator | 2026-02-15 03:04:53.652232 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-02-15 03:04:53.652243 | orchestrator | Sunday 15 February 2026 03:04:47 +0000 (0:00:00.456) 0:00:12.216 ******* 2026-02-15 03:04:53.652253 | orchestrator | ok: [testbed-manager] 2026-02-15 03:04:53.652263 | orchestrator | 2026-02-15 03:04:53.652273 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-02-15 03:04:53.652285 | orchestrator | Sunday 15 February 2026 03:04:47 +0000 (0:00:00.485) 0:00:12.702 ******* 2026-02-15 03:04:53.652295 | orchestrator | changed: [testbed-manager] 2026-02-15 03:04:53.652304 | orchestrator | 2026-02-15 03:04:53.652310 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 
2026-02-15 03:04:53.652317 | orchestrator | Sunday 15 February 2026 03:04:49 +0000 (0:00:01.325) 0:00:14.027 *******
2026-02-15 03:04:53.652323 | orchestrator | changed: [testbed-manager] => (item=None)
2026-02-15 03:04:53.652329 | orchestrator | changed: [testbed-manager]
2026-02-15 03:04:53.652335 | orchestrator |
2026-02-15 03:04:53.652341 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2026-02-15 03:04:53.652348 | orchestrator | Sunday 15 February 2026 03:04:50 +0000 (0:00:00.993) 0:00:15.021 *******
2026-02-15 03:04:53.652354 | orchestrator | changed: [testbed-manager]
2026-02-15 03:04:53.652361 | orchestrator |
2026-02-15 03:04:53.652367 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2026-02-15 03:04:53.652373 | orchestrator | Sunday 15 February 2026 03:04:52 +0000 (0:00:01.951) 0:00:16.972 *******
2026-02-15 03:04:53.652379 | orchestrator | changed: [testbed-manager]
2026-02-15 03:04:53.652385 | orchestrator |
2026-02-15 03:04:53.652392 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:04:53.652398 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:04:53.652406 | orchestrator |
2026-02-15 03:04:53.652412 | orchestrator |
2026-02-15 03:04:53.652418 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:04:53.652433 | orchestrator | Sunday 15 February 2026 03:04:53 +0000 (0:00:00.990) 0:00:17.963 *******
2026-02-15 03:04:53.652439 | orchestrator | ===============================================================================
2026-02-15 03:04:53.652446 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 7.94s
2026-02-15 03:04:53.652452 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.95s
2026-02-15 03:04:53.652459 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.68s
2026-02-15 03:04:53.652465 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.33s
2026-02-15 03:04:53.652471 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.99s
2026-02-15 03:04:53.652478 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.99s
2026-02-15 03:04:53.652484 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.76s
2026-02-15 03:04:53.652490 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.62s
2026-02-15 03:04:53.652496 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.49s
2026-02-15 03:04:53.652502 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.48s
2026-02-15 03:04:53.652509 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.46s
2026-02-15 03:04:54.059775 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2026-02-15 03:04:54.090254 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2026-02-15 03:04:54.090340 | orchestrator | Dload Upload Total Spent Left Speed
2026-02-15 03:04:54.167916 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 196 0 --:--:-- --:--:-- --:--:-- 197
2026-02-15 03:04:54.177675 | orchestrator | + osism apply --environment custom workarounds
2026-02-15 03:04:56.312218 | orchestrator | 2026-02-15 03:04:56 | INFO  | Trying to run play workarounds in environment custom
2026-02-15 03:05:06.595975 | orchestrator | 2026-02-15 03:05:06 | INFO  | Task 3aeaec92-6540-41e8-a015-c1f1e7c59c33 (workarounds) was prepared for execution.
2026-02-15 03:05:06.596116 | orchestrator | 2026-02-15 03:05:06 | INFO  | It takes a moment until task 3aeaec92-6540-41e8-a015-c1f1e7c59c33 (workarounds) has been started and output is visible here.
2026-02-15 03:05:34.182477 | orchestrator |
2026-02-15 03:05:34.182608 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-15 03:05:34.182627 | orchestrator |
2026-02-15 03:05:34.182639 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2026-02-15 03:05:34.182651 | orchestrator | Sunday 15 February 2026 03:05:11 +0000 (0:00:00.136) 0:00:00.136 *******
2026-02-15 03:05:34.182662 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2026-02-15 03:05:34.182673 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2026-02-15 03:05:34.182684 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2026-02-15 03:05:34.182695 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2026-02-15 03:05:34.182706 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2026-02-15 03:05:34.182730 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2026-02-15 03:05:34.182741 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2026-02-15 03:05:34.182752 | orchestrator |
2026-02-15 03:05:34.182763 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2026-02-15 03:05:34.182774 | orchestrator |
2026-02-15 03:05:34.182785 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-15 03:05:34.182796 | orchestrator | Sunday 15 February 2026 03:05:12 +0000 (0:00:00.894) 0:00:01.030 *******
2026-02-15 03:05:34.182807 | orchestrator | ok: [testbed-manager]
2026-02-15 03:05:34.182844 | orchestrator |
2026-02-15 03:05:34.182856 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2026-02-15 03:05:34.182867 | orchestrator |
2026-02-15 03:05:34.182878 | orchestrator | TASK [Apply netplan configuration] *********************************************
2026-02-15 03:05:34.182889 | orchestrator | Sunday 15 February 2026 03:05:14 +0000 (0:00:02.681) 0:00:03.712 *******
2026-02-15 03:05:34.182899 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:05:34.182910 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:05:34.182921 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:05:34.182931 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:05:34.182942 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:05:34.182952 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:05:34.182963 | orchestrator |
2026-02-15 03:05:34.182973 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2026-02-15 03:05:34.182984 | orchestrator |
2026-02-15 03:05:34.182998 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2026-02-15 03:05:34.183011 | orchestrator | Sunday 15 February 2026 03:05:16 +0000 (0:00:02.069) 0:00:05.781 *******
2026-02-15 03:05:34.183023 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-15 03:05:34.183038 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-15 03:05:34.183050 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-15 03:05:34.183070 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-15 03:05:34.183088 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-15 03:05:34.183154 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2026-02-15 03:05:34.183175 | orchestrator |
2026-02-15 03:05:34.183188 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2026-02-15 03:05:34.183201 | orchestrator | Sunday 15 February 2026 03:05:18 +0000 (0:00:01.616) 0:00:07.398 *******
2026-02-15 03:05:34.183213 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:05:34.183227 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:05:34.183239 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:05:34.183251 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:05:34.183265 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:05:34.183284 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:05:34.183303 | orchestrator |
2026-02-15 03:05:34.183324 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2026-02-15 03:05:34.183343 | orchestrator | Sunday 15 February 2026 03:05:22 +0000 (0:00:03.977) 0:00:11.375 *******
2026-02-15 03:05:34.183362 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:05:34.183375 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:05:34.183386 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:05:34.183402 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:05:34.183420 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:05:34.183440 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:05:34.183484 | orchestrator |
2026-02-15 03:05:34.183495 | orchestrator | PLAY [Add a workaround service] ************************************************
2026-02-15 03:05:34.183506 | orchestrator |
2026-02-15 03:05:34.183517 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2026-02-15 03:05:34.183528 | orchestrator | Sunday 15 February 2026 03:05:23 +0000 (0:00:00.766) 0:00:12.142 *******
2026-02-15 03:05:34.183539 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:05:34.183549 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:05:34.183560 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:05:34.183571 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:05:34.183581 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:05:34.183592 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:05:34.183613 | orchestrator | changed: [testbed-manager]
2026-02-15 03:05:34.183624 | orchestrator |
2026-02-15 03:05:34.183635 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2026-02-15 03:05:34.183645 | orchestrator | Sunday 15 February 2026 03:05:25 +0000 (0:00:01.719) 0:00:13.862 *******
2026-02-15 03:05:34.183656 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:05:34.183667 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:05:34.183677 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:05:34.183688 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:05:34.183699 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:05:34.183710 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:05:34.183741 | orchestrator | changed: [testbed-manager]
2026-02-15 03:05:34.183752 | orchestrator |
2026-02-15 03:05:34.183763 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2026-02-15 03:05:34.183774 | orchestrator | Sunday 15 February 2026 03:05:26 +0000 (0:00:01.676) 0:00:15.606 *******
2026-02-15 03:05:34.183785 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:05:34.183796 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:05:34.183807 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:05:34.183817 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:05:34.183828 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:05:34.183839 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:05:34.183849 | orchestrator | ok: [testbed-manager]
2026-02-15 03:05:34.183860 | orchestrator |
2026-02-15 03:05:34.183871 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2026-02-15 03:05:34.183882 | orchestrator | Sunday 15 February 2026 03:05:28 +0000 (0:00:01.676) 0:00:17.282 *******
2026-02-15 03:05:34.183893 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:05:34.183903 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:05:34.183914 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:05:34.183925 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:05:34.183936 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:05:34.183946 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:05:34.183957 | orchestrator | changed: [testbed-manager]
2026-02-15 03:05:34.183967 | orchestrator |
2026-02-15 03:05:34.183978 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2026-02-15 03:05:34.183989 | orchestrator | Sunday 15 February 2026 03:05:30 +0000 (0:00:02.033) 0:00:19.316 *******
2026-02-15 03:05:34.184000 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:05:34.184011 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:05:34.184021 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:05:34.184032 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:05:34.184043 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:05:34.184053 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:05:34.184064 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:05:34.184075 | orchestrator |
2026-02-15 03:05:34.184085 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2026-02-15 03:05:34.184284 | orchestrator |
2026-02-15 03:05:34.184302 | orchestrator | TASK [Install python3-docker] **************************************************
2026-02-15 03:05:34.184313 | orchestrator | Sunday 15 February 2026 03:05:31 +0000 (0:00:00.710) 0:00:20.027 *******
2026-02-15 03:05:34.184324 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:05:34.184335 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:05:34.184346 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:05:34.184356 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:05:34.184367 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:05:34.184378 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:05:34.184388 | orchestrator | ok: [testbed-manager]
2026-02-15 03:05:34.184399 | orchestrator |
2026-02-15 03:05:34.184409 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:05:34.184422 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-15 03:05:34.184434 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:34.184458 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:34.184477 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:34.184488 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:34.184499 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:34.184510 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:34.184521 | orchestrator |
2026-02-15 03:05:34.184532 | orchestrator |
2026-02-15 03:05:34.184543 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:05:34.184552 | orchestrator | Sunday 15 February 2026 03:05:34 +0000 (0:00:02.923) 0:00:22.950 *******
2026-02-15 03:05:34.184561 | orchestrator | ===============================================================================
2026-02-15 03:05:34.184571 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.98s
2026-02-15 03:05:34.184581 | orchestrator | Install python3-docker -------------------------------------------------- 2.92s
2026-02-15 03:05:34.184590 | orchestrator | Apply netplan configuration --------------------------------------------- 2.68s
2026-02-15 03:05:34.184600 | orchestrator | Apply netplan configuration --------------------------------------------- 2.07s
2026-02-15 03:05:34.184609 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 2.03s
2026-02-15 03:05:34.184619 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.74s
2026-02-15 03:05:34.184628 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.72s
2026-02-15 03:05:34.184637 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.68s
2026-02-15 03:05:34.184647 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.62s
2026-02-15 03:05:34.184656 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.89s
2026-02-15 03:05:34.184666 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.77s
2026-02-15 03:05:34.184687 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.71s
2026-02-15 03:05:34.979887 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2026-02-15 03:05:47.201452 | orchestrator | 2026-02-15 03:05:47 | INFO  | Task 900aa7cf-f6d9-4b5e-8033-cd7f9b2ae5f6 (reboot) was prepared for execution.
2026-02-15 03:05:47.201555 | orchestrator | 2026-02-15 03:05:47 | INFO  | It takes a moment until task 900aa7cf-f6d9-4b5e-8033-cd7f9b2ae5f6 (reboot) has been started and output is visible here.
2026-02-15 03:05:58.244073 | orchestrator |
2026-02-15 03:05:58.244239 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-15 03:05:58.244257 | orchestrator |
2026-02-15 03:05:58.244269 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-15 03:05:58.244280 | orchestrator | Sunday 15 February 2026 03:05:51 +0000 (0:00:00.226) 0:00:00.226 *******
2026-02-15 03:05:58.244292 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:05:58.244304 | orchestrator |
2026-02-15 03:05:58.244315 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-15 03:05:58.244326 | orchestrator | Sunday 15 February 2026 03:05:52 +0000 (0:00:00.131) 0:00:00.357 *******
2026-02-15 03:05:58.244336 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:05:58.244347 | orchestrator |
2026-02-15 03:05:58.244358 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-15 03:05:58.244392 | orchestrator | Sunday 15 February 2026 03:05:52 +0000 (0:00:00.945) 0:00:01.303 *******
2026-02-15 03:05:58.244404 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:05:58.244414 | orchestrator |
2026-02-15 03:05:58.244425 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-15 03:05:58.244436 | orchestrator |
2026-02-15 03:05:58.244447 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-15 03:05:58.244458 | orchestrator | Sunday 15 February 2026 03:05:53 +0000 (0:00:00.123) 0:00:01.426 *******
2026-02-15 03:05:58.244468 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:05:58.244479 | orchestrator |
2026-02-15 03:05:58.244489 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-15 03:05:58.244500 | orchestrator | Sunday 15 February 2026 03:05:53 +0000 (0:00:00.118) 0:00:01.545 *******
2026-02-15 03:05:58.244510 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:05:58.244521 | orchestrator |
2026-02-15 03:05:58.244532 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-15 03:05:58.244542 | orchestrator | Sunday 15 February 2026 03:05:53 +0000 (0:00:00.672) 0:00:02.217 *******
2026-02-15 03:05:58.244553 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:05:58.244564 | orchestrator |
2026-02-15 03:05:58.244574 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-15 03:05:58.244585 | orchestrator |
2026-02-15 03:05:58.244596 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-15 03:05:58.244606 | orchestrator | Sunday 15 February 2026 03:05:54 +0000 (0:00:00.129) 0:00:02.347 *******
2026-02-15 03:05:58.244620 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:05:58.244632 | orchestrator |
2026-02-15 03:05:58.244644 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-15 03:05:58.244657 | orchestrator | Sunday 15 February 2026 03:05:54 +0000 (0:00:00.261) 0:00:02.609 *******
2026-02-15 03:05:58.244669 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:05:58.244682 | orchestrator |
2026-02-15 03:05:58.244710 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-15 03:05:58.244721 | orchestrator | Sunday 15 February 2026 03:05:54 +0000 (0:00:00.707) 0:00:03.316 *******
2026-02-15 03:05:58.244732 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:05:58.244745 | orchestrator |
2026-02-15 03:05:58.244765 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-15 03:05:58.244783 | orchestrator |
2026-02-15 03:05:58.244801 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-15 03:05:58.244820 | orchestrator | Sunday 15 February 2026 03:05:55 +0000 (0:00:00.143) 0:00:03.460 *******
2026-02-15 03:05:58.244838 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:05:58.244858 | orchestrator |
2026-02-15 03:05:58.244877 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-15 03:05:58.244896 | orchestrator | Sunday 15 February 2026 03:05:55 +0000 (0:00:00.128) 0:00:03.588 *******
2026-02-15 03:05:58.244915 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:05:58.244934 | orchestrator |
2026-02-15 03:05:58.244952 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-15 03:05:58.244967 | orchestrator | Sunday 15 February 2026 03:05:55 +0000 (0:00:00.662) 0:00:04.251 *******
2026-02-15 03:05:58.244977 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:05:58.244988 | orchestrator |
2026-02-15 03:05:58.244999 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-15 03:05:58.245010 | orchestrator |
2026-02-15 03:05:58.245020 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-15 03:05:58.245031 | orchestrator | Sunday 15 February 2026 03:05:56 +0000 (0:00:00.136) 0:00:04.387 *******
2026-02-15 03:05:58.245042 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:05:58.245052 | orchestrator |
2026-02-15 03:05:58.245063 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-15 03:05:58.245083 | orchestrator | Sunday 15 February 2026 03:05:56 +0000 (0:00:00.101) 0:00:04.489 *******
2026-02-15 03:05:58.245094 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:05:58.245105 | orchestrator |
2026-02-15 03:05:58.245116 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-15 03:05:58.245126 | orchestrator | Sunday 15 February 2026 03:05:56 +0000 (0:00:00.666) 0:00:05.156 *******
2026-02-15 03:05:58.245137 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:05:58.245148 | orchestrator |
2026-02-15 03:05:58.245197 | orchestrator | PLAY [Reboot systems] **********************************************************
2026-02-15 03:05:58.245209 | orchestrator |
2026-02-15 03:05:58.245220 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2026-02-15 03:05:58.245230 | orchestrator | Sunday 15 February 2026 03:05:56 +0000 (0:00:00.131) 0:00:05.287 *******
2026-02-15 03:05:58.245253 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:05:58.245276 | orchestrator |
2026-02-15 03:05:58.245287 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2026-02-15 03:05:58.245298 | orchestrator | Sunday 15 February 2026 03:05:57 +0000 (0:00:00.117) 0:00:05.405 *******
2026-02-15 03:05:58.245309 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:05:58.245319 | orchestrator |
2026-02-15 03:05:58.245330 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2026-02-15 03:05:58.245341 | orchestrator | Sunday 15 February 2026 03:05:57 +0000 (0:00:00.685) 0:00:06.090 *******
2026-02-15 03:05:58.245369 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:05:58.245380 | orchestrator |
2026-02-15 03:05:58.245391 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:05:58.245403 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:58.245416 | orchestrator |
testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:58.245427 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:58.245437 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:58.245448 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:58.245459 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:05:58.245469 | orchestrator |
2026-02-15 03:05:58.245480 | orchestrator |
2026-02-15 03:05:58.245491 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:05:58.245502 | orchestrator | Sunday 15 February 2026 03:05:57 +0000 (0:00:00.042) 0:00:06.133 *******
2026-02-15 03:05:58.245513 | orchestrator | ===============================================================================
2026-02-15 03:05:58.245523 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.34s
2026-02-15 03:05:58.245534 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.86s
2026-02-15 03:05:58.245545 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.71s
2026-02-15 03:05:58.606710 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2026-02-15 03:06:10.863342 | orchestrator | 2026-02-15 03:06:10 | INFO  | Task 036ac020-bb8d-40ec-9a1c-6c2660eda07f (wait-for-connection) was prepared for execution.
2026-02-15 03:06:10.864062 | orchestrator | 2026-02-15 03:06:10 | INFO  | It takes a moment until task 036ac020-bb8d-40ec-9a1c-6c2660eda07f (wait-for-connection) has been started and output is visible here.
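The `wait-for-connection` apply that follows is a poll-until-reachable pattern: keep probing each node until it answers, then move on. Outside of Ansible, the same idea can be sketched as a small shell helper (the names `retry_until` and the `nc` probe are illustrative, not part of the testbed scripts):

```shell
# Generic poll-until-success helper: run CMD up to MAX_ATTEMPTS times,
# sleeping DELAY seconds between tries; fail once attempts run out.
retry_until() {
    local max_attempts=$1 delay=$2
    shift 2
    local attempt
    for (( attempt = 1; attempt <= max_attempts; attempt++ )); do
        "$@" && return 0   # probe succeeded: target is reachable
        sleep "$delay"
    done
    return 1               # exhausted all attempts
}

# Example probe: wait until a node's SSH port answers.
# retry_until 60 5 nc -z testbed-node-0 22
```

Ansible's `wait_for_connection` module does the equivalent with its own transport test, plus configurable `timeout` and `sleep` intervals.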
2026-02-15 03:06:27.616862 | orchestrator |
2026-02-15 03:06:27.616959 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2026-02-15 03:06:27.616970 | orchestrator |
2026-02-15 03:06:27.616978 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2026-02-15 03:06:27.616985 | orchestrator | Sunday 15 February 2026 03:06:15 +0000 (0:00:00.260) 0:00:00.260 *******
2026-02-15 03:06:27.616992 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:06:27.617000 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:06:27.617007 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:06:27.617013 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:06:27.617020 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:06:27.617026 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:06:27.617033 | orchestrator |
2026-02-15 03:06:27.617040 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:06:27.617048 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:06:27.617056 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:06:27.617063 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:06:27.617070 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:06:27.617076 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:06:27.617083 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:06:27.617090 | orchestrator |
2026-02-15 03:06:27.617097 | orchestrator |
2026-02-15 03:06:27.617104 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:06:27.617110 | orchestrator | Sunday 15 February 2026 03:06:27 +0000 (0:00:11.654) 0:00:11.915 *******
2026-02-15 03:06:27.617117 | orchestrator | ===============================================================================
2026-02-15 03:06:27.617124 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.65s
2026-02-15 03:06:27.979660 | orchestrator | + osism apply hddtemp
2026-02-15 03:06:40.187586 | orchestrator | 2026-02-15 03:06:40 | INFO  | Task 92480d9f-82cc-466b-9dd8-33c97c1558dd (hddtemp) was prepared for execution.
2026-02-15 03:06:40.187785 | orchestrator | 2026-02-15 03:06:40 | INFO  | It takes a moment until task 92480d9f-82cc-466b-9dd8-33c97c1558dd (hddtemp) has been started and output is visible here.
2026-02-15 03:07:09.794381 | orchestrator |
2026-02-15 03:07:09.794478 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2026-02-15 03:07:09.794492 | orchestrator |
2026-02-15 03:07:09.794500 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2026-02-15 03:07:09.794508 | orchestrator | Sunday 15 February 2026 03:06:44 +0000 (0:00:00.290) 0:00:00.290 *******
2026-02-15 03:07:09.794516 | orchestrator | ok: [testbed-manager]
2026-02-15 03:07:09.794524 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:07:09.794531 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:07:09.794538 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:07:09.794546 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:07:09.794553 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:07:09.794560 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:07:09.794567 | orchestrator |
2026-02-15 03:07:09.794574 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2026-02-15 03:07:09.794582 | orchestrator | Sunday 15 February 2026
03:06:45 +0000 (0:00:00.816) 0:00:01.107 ******* 2026-02-15 03:07:09.794591 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:07:09.794620 | orchestrator | 2026-02-15 03:07:09.794629 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-02-15 03:07:09.794636 | orchestrator | Sunday 15 February 2026 03:06:47 +0000 (0:00:01.350) 0:00:02.458 ******* 2026-02-15 03:07:09.794643 | orchestrator | ok: [testbed-manager] 2026-02-15 03:07:09.794649 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:07:09.794656 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:07:09.794662 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:07:09.794670 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:07:09.794677 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:07:09.794683 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:07:09.794689 | orchestrator | 2026-02-15 03:07:09.794696 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-02-15 03:07:09.794703 | orchestrator | Sunday 15 February 2026 03:06:48 +0000 (0:00:01.779) 0:00:04.237 ******* 2026-02-15 03:07:09.794710 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:07:09.794719 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:07:09.794725 | orchestrator | changed: [testbed-manager] 2026-02-15 03:07:09.794732 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:07:09.794738 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:07:09.794745 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:07:09.794752 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:07:09.794759 | orchestrator | 2026-02-15 03:07:09.794767 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2026-02-15 03:07:09.794774 | orchestrator | Sunday 15 February 2026 03:06:50 +0000 (0:00:01.266) 0:00:05.504 ******* 2026-02-15 03:07:09.794780 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:07:09.794787 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:07:09.794794 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:07:09.794801 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:07:09.794808 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:07:09.794829 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:07:09.794837 | orchestrator | ok: [testbed-manager] 2026-02-15 03:07:09.794844 | orchestrator | 2026-02-15 03:07:09.794851 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-02-15 03:07:09.794857 | orchestrator | Sunday 15 February 2026 03:06:51 +0000 (0:00:01.317) 0:00:06.821 ******* 2026-02-15 03:07:09.794865 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:07:09.794872 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:07:09.794879 | orchestrator | changed: [testbed-manager] 2026-02-15 03:07:09.794885 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:07:09.794892 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:07:09.794900 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:07:09.794907 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:07:09.794915 | orchestrator | 2026-02-15 03:07:09.794923 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-02-15 03:07:09.794930 | orchestrator | Sunday 15 February 2026 03:06:52 +0000 (0:00:00.883) 0:00:07.704 ******* 2026-02-15 03:07:09.794937 | orchestrator | changed: [testbed-manager] 2026-02-15 03:07:09.794944 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:07:09.794951 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:07:09.794957 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:07:09.794964 | orchestrator | changed: 
[testbed-node-3] 2026-02-15 03:07:09.794971 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:07:09.794978 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:07:09.794984 | orchestrator | 2026-02-15 03:07:09.794990 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-02-15 03:07:09.794997 | orchestrator | Sunday 15 February 2026 03:07:05 +0000 (0:00:13.548) 0:00:21.252 ******* 2026-02-15 03:07:09.795004 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:07:09.795018 | orchestrator | 2026-02-15 03:07:09.795025 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-02-15 03:07:09.795031 | orchestrator | Sunday 15 February 2026 03:07:07 +0000 (0:00:01.403) 0:00:22.656 ******* 2026-02-15 03:07:09.795038 | orchestrator | changed: [testbed-manager] 2026-02-15 03:07:09.795044 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:07:09.795051 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:07:09.795057 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:07:09.795064 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:07:09.795070 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:07:09.795077 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:07:09.795083 | orchestrator | 2026-02-15 03:07:09.795089 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:07:09.795097 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 03:07:09.795124 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:07:09.795131 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:07:09.795136 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:07:09.795141 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:07:09.795146 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:07:09.795152 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:07:09.795158 | orchestrator | 2026-02-15 03:07:09.795164 | orchestrator | 2026-02-15 03:07:09.795170 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:07:09.795176 | orchestrator | Sunday 15 February 2026 03:07:09 +0000 (0:00:02.063) 0:00:24.719 ******* 2026-02-15 03:07:09.795183 | orchestrator | =============================================================================== 2026-02-15 03:07:09.795189 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.55s 2026-02-15 03:07:09.795195 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.06s 2026-02-15 03:07:09.795202 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.78s 2026-02-15 03:07:09.795208 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.40s 2026-02-15 03:07:09.795215 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.35s 2026-02-15 03:07:09.795221 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.32s 2026-02-15 03:07:09.795228 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.27s 2026-02-15 03:07:09.795234 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.88s 2026-02-15 03:07:09.795241 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.82s 2026-02-15 03:07:10.179706 | orchestrator | ++ semver 9.5.0 7.1.1 2026-02-15 03:07:10.235296 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-15 03:07:10.235463 | orchestrator | + sudo systemctl restart manager.service 2026-02-15 03:07:23.868183 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-15 03:07:23.868298 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-02-15 03:07:23.868334 | orchestrator | + local max_attempts=60 2026-02-15 03:07:23.868429 | orchestrator | + local name=ceph-ansible 2026-02-15 03:07:23.868441 | orchestrator | + local attempt_num=1 2026-02-15 03:07:23.868451 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:07:23.901260 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-15 03:07:23.901326 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-15 03:07:23.901332 | orchestrator | + sleep 5 2026-02-15 03:07:28.909213 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:07:28.949122 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-15 03:07:28.949211 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-15 03:07:28.949223 | orchestrator | + sleep 5 2026-02-15 03:07:33.951637 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:07:33.977353 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-15 03:07:33.977531 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-15 03:07:33.977545 | orchestrator | + sleep 5 2026-02-15 03:07:38.981750 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:07:39.017713 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-15 03:07:39.017793 | orchestrator | 
+ (( attempt_num++ == max_attempts )) 2026-02-15 03:07:39.017803 | orchestrator | + sleep 5 2026-02-15 03:07:44.022588 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:07:44.062075 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-15 03:07:44.062190 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-15 03:07:44.062210 | orchestrator | + sleep 5 2026-02-15 03:07:49.065563 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:07:49.110840 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-15 03:07:49.110925 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-15 03:07:49.110938 | orchestrator | + sleep 5 2026-02-15 03:07:54.116401 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:07:54.156380 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-02-15 03:07:54.156494 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-15 03:07:54.156522 | orchestrator | + sleep 5 2026-02-15 03:07:59.163382 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:07:59.214742 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-15 03:07:59.214811 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-15 03:07:59.214817 | orchestrator | + sleep 5 2026-02-15 03:08:04.218239 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:08:04.266340 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-15 03:08:04.266503 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-15 03:08:04.266523 | orchestrator | + sleep 5 2026-02-15 03:08:09.271176 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:08:09.307185 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-15 03:08:09.307281 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2026-02-15 03:08:09.307292 | orchestrator | + sleep 5 2026-02-15 03:08:14.313133 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:08:14.355261 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-15 03:08:14.355379 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-15 03:08:14.355401 | orchestrator | + sleep 5 2026-02-15 03:08:19.361694 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:08:19.403026 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-15 03:08:19.403116 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-15 03:08:19.403128 | orchestrator | + sleep 5 2026-02-15 03:08:24.408415 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:08:24.453654 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-02-15 03:08:24.453804 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-02-15 03:08:24.453828 | orchestrator | + sleep 5 2026-02-15 03:08:29.459312 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-02-15 03:08:29.497479 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-15 03:08:29.497598 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-15 03:08:29.497611 | orchestrator | + local max_attempts=60 2026-02-15 03:08:29.497620 | orchestrator | + local name=kolla-ansible 2026-02-15 03:08:29.497629 | orchestrator | + local attempt_num=1 2026-02-15 03:08:29.498553 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-15 03:08:29.528405 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-15 03:08:29.528473 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-15 03:08:29.528526 | orchestrator | + local max_attempts=60 2026-02-15 03:08:29.528533 | orchestrator | + local name=osism-ansible 2026-02-15 03:08:29.528538 | 
orchestrator | + local attempt_num=1 2026-02-15 03:08:29.528544 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-15 03:08:29.555943 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-15 03:08:29.556029 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-15 03:08:29.556043 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-02-15 03:08:29.737305 | orchestrator | ARA in ceph-ansible already disabled. 2026-02-15 03:08:29.900559 | orchestrator | ARA in kolla-ansible already disabled. 2026-02-15 03:08:30.075835 | orchestrator | ARA in osism-ansible already disabled. 2026-02-15 03:08:30.267850 | orchestrator | ARA in osism-kubernetes already disabled. 2026-02-15 03:08:30.269365 | orchestrator | + osism apply gather-facts 2026-02-15 03:08:42.594235 | orchestrator | 2026-02-15 03:08:42 | INFO  | Task a7d06300-8095-4bb0-9dc0-89b2e49fcddd (gather-facts) was prepared for execution. 2026-02-15 03:08:42.594311 | orchestrator | 2026-02-15 03:08:42 | INFO  | It takes a moment until task a7d06300-8095-4bb0-9dc0-89b2e49fcddd (gather-facts) has been started and output is visible here. 
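The health-check polling traced above (`wait_for_container_healthy 60 ceph-ansible`, then `kolla-ansible` and `osism-ansible`) corresponds to a helper along these lines. This is a sketch reconstructed from the `set -x` output; the local variable names match the trace, while the error message is assumed:

```shell
# Poll a container's Docker health status until it reports "healthy".
# Sketch of the wait_for_container_healthy function seen in the trace above.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        sleep 5
    done
}
```

A container with a `HEALTHCHECK` moves through `starting` before reaching `healthy` or `unhealthy`, which matches the `unhealthy` → `starting` → `healthy` progression the trace shows for `ceph-ansible` after the manager restart.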
2026-02-15 03:08:57.054388 | orchestrator | 2026-02-15 03:08:57.054484 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-15 03:08:57.054495 | orchestrator | 2026-02-15 03:08:57.054504 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-15 03:08:57.054512 | orchestrator | Sunday 15 February 2026 03:08:47 +0000 (0:00:00.244) 0:00:00.244 ******* 2026-02-15 03:08:57.054520 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:08:57.054529 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:08:57.054560 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:08:57.054574 | orchestrator | ok: [testbed-manager] 2026-02-15 03:08:57.054586 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:08:57.054594 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:08:57.054601 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:08:57.054609 | orchestrator | 2026-02-15 03:08:57.054616 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-15 03:08:57.054624 | orchestrator | 2026-02-15 03:08:57.054631 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-15 03:08:57.054639 | orchestrator | Sunday 15 February 2026 03:08:55 +0000 (0:00:08.595) 0:00:08.840 ******* 2026-02-15 03:08:57.054647 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:08:57.054655 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:08:57.054662 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:08:57.054670 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:08:57.054677 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:08:57.054684 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:08:57.054691 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:08:57.054698 | orchestrator | 2026-02-15 03:08:57.054706 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-15 03:08:57.054713 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:08:57.054722 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:08:57.054729 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:08:57.054736 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:08:57.054743 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:08:57.054751 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:08:57.054779 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 03:08:57.054787 | orchestrator | 2026-02-15 03:08:57.054794 | orchestrator | 2026-02-15 03:08:57.054801 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:08:57.054808 | orchestrator | Sunday 15 February 2026 03:08:56 +0000 (0:00:00.588) 0:00:09.428 ******* 2026-02-15 03:08:57.054816 | orchestrator | =============================================================================== 2026-02-15 03:08:57.054823 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.60s 2026-02-15 03:08:57.054830 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s 2026-02-15 03:08:57.430521 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-02-15 03:08:57.447675 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-02-15 
03:08:57.469673 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-02-15 03:08:57.484918 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-02-15 03:08:57.499947 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-02-15 03:08:57.519027 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-02-15 03:08:57.543304 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-02-15 03:08:57.560754 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-02-15 03:08:57.582621 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-02-15 03:08:57.600657 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-02-15 03:08:57.618179 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-02-15 03:08:57.635190 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-02-15 03:08:57.657701 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-02-15 03:08:57.672153 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-02-15 03:08:57.684967 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-02-15 03:08:57.696400 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2026-02-15 03:08:57.709629 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2026-02-15 03:08:57.729995 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2026-02-15 03:08:57.750438 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2026-02-15 03:08:57.772672 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2026-02-15 03:08:57.797377 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2026-02-15 03:08:57.814951 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2026-02-15 03:08:57.827469 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2026-02-15 03:08:57.842283 | orchestrator | + [[ false == \t\r\u\e ]]
2026-02-15 03:08:58.024533 | orchestrator | ok: Runtime: 0:25:17.748718
2026-02-15 03:08:58.374129 |
2026-02-15 03:08:58.374333 | TASK [Deploy services]
2026-02-15 03:08:59.266506 | orchestrator |
2026-02-15 03:08:59.266647 | orchestrator | # DEPLOY SERVICES
2026-02-15 03:08:59.266659 | orchestrator |
2026-02-15 03:08:59.266665 | orchestrator | + set -e
2026-02-15 03:08:59.266670 | orchestrator | + echo
2026-02-15 03:08:59.266676 | orchestrator | + echo '# DEPLOY SERVICES'
2026-02-15 03:08:59.266682 | orchestrator | + echo
2026-02-15 03:08:59.266703 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-15 03:08:59.266712 | orchestrator | ++ export INTERACTIVE=false
2026-02-15 03:08:59.266718 | orchestrator | ++ INTERACTIVE=false
2026-02-15 03:08:59.266723 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-15 03:08:59.266732 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-15 03:08:59.266736 | orchestrator | + source /opt/manager-vars.sh
2026-02-15 03:08:59.266742 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-15 03:08:59.266746 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-15 03:08:59.266753 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-15 03:08:59.266757 | orchestrator | ++ CEPH_VERSION=reef
2026-02-15 03:08:59.266763 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-15 03:08:59.266767 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-15 03:08:59.266773 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-15 03:08:59.266777 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-15 03:08:59.266781 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-15 03:08:59.266786 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-15 03:08:59.266790 | orchestrator | ++ export ARA=false
2026-02-15 03:08:59.266794 | orchestrator | ++ ARA=false
2026-02-15 03:08:59.266798 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-15 03:08:59.266802 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-15 03:08:59.266805 | orchestrator | ++ export TEMPEST=false
2026-02-15 03:08:59.266809 | orchestrator | ++ TEMPEST=false
2026-02-15 03:08:59.266813 | orchestrator | ++ export IS_ZUUL=true
2026-02-15 03:08:59.266817 | orchestrator | ++ IS_ZUUL=true
2026-02-15 03:08:59.266821 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 03:08:59.266825 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 03:08:59.266829 | orchestrator | ++ export EXTERNAL_API=false
2026-02-15 03:08:59.266832 | orchestrator | ++ EXTERNAL_API=false
2026-02-15 03:08:59.266836 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-15 03:08:59.266840 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-15 03:08:59.266844 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-15 03:08:59.266848 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-15 03:08:59.266852 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-15 03:08:59.266859 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-15 03:08:59.266864 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh
2026-02-15 03:08:59.273873 | orchestrator | + set -e
2026-02-15 03:08:59.274279 | orchestrator |
2026-02-15 03:08:59.274299 | orchestrator | # PULL IMAGES
2026-02-15 03:08:59.274305 | orchestrator |
2026-02-15 03:08:59.274311 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-15 03:08:59.274320 | orchestrator | ++ export INTERACTIVE=false
2026-02-15 03:08:59.274327 | orchestrator | ++ INTERACTIVE=false
2026-02-15 03:08:59.274332 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-15 03:08:59.274338 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-15 03:08:59.274343 | orchestrator | + source /opt/manager-vars.sh
2026-02-15 03:08:59.274348 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-15 03:08:59.274358 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-15 03:08:59.274378 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-15 03:08:59.274385 | orchestrator | ++ CEPH_VERSION=reef
2026-02-15 03:08:59.274391 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-15 03:08:59.274397 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-15 03:08:59.274403 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-15 03:08:59.274410 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-15 03:08:59.274416 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-15 03:08:59.274422 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-15 03:08:59.274498 | orchestrator | ++ export ARA=false
2026-02-15 03:08:59.274503 | orchestrator | ++ ARA=false
2026-02-15 03:08:59.274510 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-15 03:08:59.274515 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-15 03:08:59.274519 | orchestrator | ++ export TEMPEST=false
2026-02-15 03:08:59.274523 | orchestrator | ++ TEMPEST=false
2026-02-15 03:08:59.274527 | orchestrator | ++ export IS_ZUUL=true
2026-02-15 03:08:59.274531 | orchestrator | ++ IS_ZUUL=true
2026-02-15 03:08:59.274535 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 03:08:59.274539 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 03:08:59.274557 | orchestrator | ++ export EXTERNAL_API=false
2026-02-15 03:08:59.274562 | orchestrator | ++ EXTERNAL_API=false
2026-02-15 03:08:59.274566 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-15 03:08:59.274570 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-15 03:08:59.274594 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-15 03:08:59.274598 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-15 03:08:59.274602 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-15 03:08:59.274606 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-15 03:08:59.274610 | orchestrator | + echo
2026-02-15 03:08:59.274614 | orchestrator | + echo '# PULL IMAGES'
2026-02-15 03:08:59.274618 | orchestrator | + echo
2026-02-15 03:08:59.274628 | orchestrator | ++ semver 9.5.0 7.0.0
2026-02-15 03:08:59.321423 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-15 03:08:59.321517 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-02-15 03:09:01.446266 | orchestrator | 2026-02-15 03:09:01 | INFO  | Trying to run play pull-images in environment custom
2026-02-15 03:09:11.648491 | orchestrator | 2026-02-15 03:09:11 | INFO  | Task 6c5b81d7-53b1-455e-ae00-8253804a9e69 (pull-images) was prepared for execution.
2026-02-15 03:09:11.648660 | orchestrator | 2026-02-15 03:09:11 | INFO  | Task 6c5b81d7-53b1-455e-ae00-8253804a9e69 is running in background. No more output. Check ARA for logs.
2026-02-15 03:09:12.013269 | orchestrator | + sh -c /opt/configuration/scripts/deploy/001-helpers.sh
2026-02-15 03:09:24.424348 | orchestrator | 2026-02-15 03:09:24 | INFO  | Task 9da6eb8f-9f7c-4b7b-9249-1eea50aeed3b (cgit) was prepared for execution.
2026-02-15 03:09:24.424458 | orchestrator | 2026-02-15 03:09:24 | INFO  | Task 9da6eb8f-9f7c-4b7b-9249-1eea50aeed3b is running in background. No more output. Check ARA for logs.
2026-02-15 03:09:37.226229 | orchestrator | 2026-02-15 03:09:37 | INFO  | Task f5964bb9-aed1-4c19-a38f-8d9a72062e36 (dotfiles) was prepared for execution.
2026-02-15 03:09:37.226368 | orchestrator | 2026-02-15 03:09:37 | INFO  | Task f5964bb9-aed1-4c19-a38f-8d9a72062e36 is running in background. No more output. Check ARA for logs.
2026-02-15 03:09:50.577866 | orchestrator | 2026-02-15 03:09:50 | INFO  | Task f97e0d72-0890-47e9-ac59-e8306af73614 (homer) was prepared for execution.
2026-02-15 03:09:50.577992 | orchestrator | 2026-02-15 03:09:50 | INFO  | Task f97e0d72-0890-47e9-ac59-e8306af73614 is running in background. No more output. Check ARA for logs.
2026-02-15 03:10:03.216902 | orchestrator | 2026-02-15 03:10:03 | INFO  | Task d7a59d82-036a-4186-8a2f-09d65ad4b0a8 (phpmyadmin) was prepared for execution.
2026-02-15 03:10:03.217017 | orchestrator | 2026-02-15 03:10:03 | INFO  | Task d7a59d82-036a-4186-8a2f-09d65ad4b0a8 is running in background. No more output. Check ARA for logs.
2026-02-15 03:10:15.845377 | orchestrator | 2026-02-15 03:10:15 | INFO  | Task 6d608eab-3143-4df0-a1af-ca6320268d9f (sosreport) was prepared for execution.
2026-02-15 03:10:15.845484 | orchestrator | 2026-02-15 03:10:15 | INFO  | Task 6d608eab-3143-4df0-a1af-ca6320268d9f is running in background. No more output. Check ARA for logs.
2026-02-15 03:10:16.197902 | orchestrator | + sh -c /opt/configuration/scripts/deploy/500-kubernetes.sh
2026-02-15 03:10:16.206613 | orchestrator | + set -e
2026-02-15 03:10:16.206673 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-15 03:10:16.206681 | orchestrator | ++ export INTERACTIVE=false
2026-02-15 03:10:16.206687 | orchestrator | ++ INTERACTIVE=false
2026-02-15 03:10:16.206736 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-15 03:10:16.206742 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-15 03:10:16.206783 | orchestrator | + source /opt/manager-vars.sh
2026-02-15 03:10:16.206982 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-15 03:10:16.207064 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-15 03:10:16.207079 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-15 03:10:16.207090 | orchestrator | ++ CEPH_VERSION=reef
2026-02-15 03:10:16.207102 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-15 03:10:16.207115 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-15 03:10:16.207127 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-15 03:10:16.207138 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-15 03:10:16.207150 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-15 03:10:16.207161 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-15 03:10:16.207173 | orchestrator | ++ export ARA=false
2026-02-15 03:10:16.208169 | orchestrator | ++ ARA=false
2026-02-15 03:10:16.208209 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-15 03:10:16.208246 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-15 03:10:16.208258 | orchestrator | ++ export TEMPEST=false
2026-02-15 03:10:16.208269 | orchestrator | ++ TEMPEST=false
2026-02-15 03:10:16.208280 | orchestrator | ++ export IS_ZUUL=true
2026-02-15 03:10:16.208291 | orchestrator | ++ IS_ZUUL=true
2026-02-15 03:10:16.208316 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 03:10:16.208333 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 03:10:16.208345 | orchestrator | ++ export EXTERNAL_API=false
2026-02-15 03:10:16.208356 | orchestrator | ++ EXTERNAL_API=false
2026-02-15 03:10:16.208367 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-15 03:10:16.208378 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-15 03:10:16.208389 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-15 03:10:16.208400 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-15 03:10:16.208411 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-15 03:10:16.208422 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-15 03:10:16.209790 | orchestrator | ++ semver 9.5.0 8.0.3
2026-02-15 03:10:16.271278 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-15 03:10:16.271376 | orchestrator | + osism apply frr
2026-02-15 03:10:28.592800 | orchestrator | 2026-02-15 03:10:28 | INFO  | Task d1655e50-373e-4cab-b19e-c64505c15239 (frr) was prepared for execution.
2026-02-15 03:10:28.594205 | orchestrator | 2026-02-15 03:10:28 | INFO  | It takes a moment until task d1655e50-373e-4cab-b19e-c64505c15239 (frr) has been started and output is visible here.
2026-02-15 03:11:07.346249 | orchestrator |
2026-02-15 03:11:07.346333 | orchestrator | PLAY [Apply role frr] **********************************************************
2026-02-15 03:11:07.346342 | orchestrator |
2026-02-15 03:11:07.346347 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2026-02-15 03:11:07.346357 | orchestrator | Sunday 15 February 2026 03:10:35 +0000 (0:00:00.262) 0:00:00.262 *******
2026-02-15 03:11:07.346363 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2026-02-15 03:11:07.346369 | orchestrator |
2026-02-15 03:11:07.346374 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2026-02-15 03:11:07.346379 | orchestrator | Sunday 15 February 2026 03:10:35 +0000 (0:00:00.262) 0:00:00.525 *******
2026-02-15 03:11:07.346385 | orchestrator | changed: [testbed-manager]
2026-02-15 03:11:07.346390 | orchestrator |
2026-02-15 03:11:07.346395 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2026-02-15 03:11:07.346401 | orchestrator | Sunday 15 February 2026 03:10:37 +0000 (0:00:02.369) 0:00:02.895 *******
2026-02-15 03:11:07.346406 | orchestrator | changed: [testbed-manager]
2026-02-15 03:11:07.346410 | orchestrator |
2026-02-15 03:11:07.346415 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2026-02-15 03:11:07.346420 | orchestrator | Sunday 15 February 2026 03:10:54 +0000 (0:00:16.338) 0:00:19.233 *******
2026-02-15 03:11:07.346424 | orchestrator | ok: [testbed-manager]
2026-02-15 03:11:07.346430 | orchestrator |
2026-02-15 03:11:07.346435 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2026-02-15 03:11:07.346439 | orchestrator | Sunday 15 February 2026 03:10:55 +0000 (0:00:01.763) 0:00:20.997 *******
2026-02-15 03:11:07.346444 | orchestrator | changed: [testbed-manager]
2026-02-15 03:11:07.346449 | orchestrator |
2026-02-15 03:11:07.346453 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2026-02-15 03:11:07.346458 | orchestrator | Sunday 15 February 2026 03:10:57 +0000 (0:00:01.218) 0:00:22.215 *******
2026-02-15 03:11:07.346463 | orchestrator | ok: [testbed-manager]
2026-02-15 03:11:07.346467 | orchestrator |
2026-02-15 03:11:07.346472 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2026-02-15 03:11:07.346478 | orchestrator | Sunday 15 February 2026 03:10:58 +0000 (0:00:01.622) 0:00:23.837 *******
2026-02-15 03:11:07.346482 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:11:07.346487 | orchestrator |
2026-02-15 03:11:07.346491 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] ***
2026-02-15 03:11:07.346496 | orchestrator | Sunday 15 February 2026 03:10:58 +0000 (0:00:00.151) 0:00:23.989 *******
2026-02-15 03:11:07.346520 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:11:07.346529 | orchestrator |
2026-02-15 03:11:07.346537 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ******
2026-02-15 03:11:07.346545 | orchestrator | Sunday 15 February 2026 03:10:59 +0000 (0:00:00.187) 0:00:24.177 *******
2026-02-15 03:11:07.346551 | orchestrator | changed: [testbed-manager]
2026-02-15 03:11:07.346559 | orchestrator |
2026-02-15 03:11:07.346566 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2026-02-15 03:11:07.346574 | orchestrator | Sunday 15 February 2026 03:11:00 +0000 (0:00:01.389) 0:00:25.566 *******
2026-02-15 03:11:07.346593 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2026-02-15 03:11:07.346601 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2026-02-15 03:11:07.346618 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2026-02-15 03:11:07.346625 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2026-02-15 03:11:07.346630 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2026-02-15 03:11:07.346634 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2026-02-15 03:11:07.346639 | orchestrator |
2026-02-15 03:11:07.346644 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2026-02-15 03:11:07.346648 | orchestrator | Sunday 15 February 2026 03:11:03 +0000 (0:00:02.930) 0:00:28.497 *******
2026-02-15 03:11:07.346653 | orchestrator | ok: [testbed-manager]
2026-02-15 03:11:07.346657 | orchestrator |
2026-02-15 03:11:07.346662 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2026-02-15 03:11:07.346666 | orchestrator | Sunday 15 February 2026 03:11:05 +0000 (0:00:01.856) 0:00:30.354 *******
2026-02-15 03:11:07.346671 | orchestrator | changed: [testbed-manager]
2026-02-15 03:11:07.346675 | orchestrator |
2026-02-15 03:11:07.346680 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:11:07.346685 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:11:07.346690 | orchestrator |
2026-02-15 03:11:07.346694 | orchestrator |
2026-02-15 03:11:07.346703 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:11:07.346708 | orchestrator | Sunday 15 February 2026 03:11:06 +0000 (0:00:01.608) 0:00:31.962 *******
2026-02-15 03:11:07.346712 | orchestrator | ===============================================================================
2026-02-15 03:11:07.346717 | orchestrator | osism.services.frr : Install frr package ------------------------------- 16.34s
2026-02-15 03:11:07.346721 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.93s
2026-02-15 03:11:07.346726 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.37s
2026-02-15 03:11:07.346731 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.86s
2026-02-15 03:11:07.346735 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.76s
2026-02-15 03:11:07.346751 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.62s
2026-02-15 03:11:07.346755 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.61s
2026-02-15 03:11:07.346760 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.39s
2026-02-15 03:11:07.346764 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.22s
2026-02-15 03:11:07.346769 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.26s
2026-02-15 03:11:07.346774 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.19s
2026-02-15 03:11:07.346778 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.15s
2026-02-15 03:11:07.761338 | orchestrator | + osism apply kubernetes
2026-02-15 03:11:10.080663 | orchestrator | 2026-02-15 03:11:10 | INFO  | Task ae4bd676-971f-4ed8-8e22-7cfd7849d600 (kubernetes) was prepared for execution.
2026-02-15 03:11:10.080735 | orchestrator | 2026-02-15 03:11:10 | INFO  | It takes a moment until task ae4bd676-971f-4ed8-8e22-7cfd7849d600 (kubernetes) has been started and output is visible here.
2026-02-15 03:11:38.058201 | orchestrator |
2026-02-15 03:11:38.058296 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2026-02-15 03:11:38.058307 | orchestrator |
2026-02-15 03:11:38.058315 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2026-02-15 03:11:38.058324 | orchestrator | Sunday 15 February 2026 03:11:15 +0000 (0:00:00.242) 0:00:00.242 *******
2026-02-15 03:11:38.058331 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:11:38.058339 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:11:38.058346 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:11:38.058353 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:11:38.058360 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:11:38.058367 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:11:38.058373 | orchestrator |
2026-02-15 03:11:38.058380 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2026-02-15 03:11:38.058387 | orchestrator | Sunday 15 February 2026 03:11:16 +0000 (0:00:00.891) 0:00:01.134 *******
2026-02-15 03:11:38.058394 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:11:38.058402 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:11:38.058408 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:11:38.058423 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:11:38.058430 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:11:38.058437 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:11:38.058443 | orchestrator |
2026-02-15 03:11:38.058450 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2026-02-15 03:11:38.058459 | orchestrator | Sunday 15 February 2026 03:11:17 +0000 (0:00:00.878) 0:00:02.012 *******
2026-02-15 03:11:38.058466 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:11:38.058473 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:11:38.058480 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:11:38.058486 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:11:38.058493 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:11:38.058500 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:11:38.058507 | orchestrator |
2026-02-15 03:11:38.058514 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2026-02-15 03:11:38.058521 | orchestrator | Sunday 15 February 2026 03:11:18 +0000 (0:00:00.793) 0:00:02.806 *******
2026-02-15 03:11:38.058528 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:11:38.058534 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:11:38.058541 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:11:38.058553 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:11:38.058560 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:11:38.058567 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:11:38.058573 | orchestrator |
2026-02-15 03:11:38.058580 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2026-02-15 03:11:38.058588 | orchestrator | Sunday 15 February 2026 03:11:20 +0000 (0:00:02.228) 0:00:05.035 *******
2026-02-15 03:11:38.058594 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:11:38.058603 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:11:38.058614 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:11:38.058625 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:11:38.058635 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:11:38.058646 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:11:38.058657 | orchestrator |
2026-02-15 03:11:38.058668 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2026-02-15 03:11:38.058678 | orchestrator | Sunday 15 February 2026 03:11:22 +0000 (0:00:01.974) 0:00:07.010 *******
2026-02-15 03:11:38.058689 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:11:38.058723 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:11:38.058735 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:11:38.058742 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:11:38.058750 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:11:38.058757 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:11:38.058765 | orchestrator |
2026-02-15 03:11:38.058780 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2026-02-15 03:11:38.058788 | orchestrator | Sunday 15 February 2026 03:11:23 +0000 (0:00:01.393) 0:00:08.404 *******
2026-02-15 03:11:38.058796 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:11:38.058804 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:11:38.058812 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:11:38.058819 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:11:38.058827 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:11:38.058835 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:11:38.058843 | orchestrator |
2026-02-15 03:11:38.058904 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2026-02-15 03:11:38.058918 | orchestrator | Sunday 15 February 2026 03:11:24 +0000 (0:00:01.006) 0:00:09.410 *******
2026-02-15 03:11:38.058928 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:11:38.058939 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:11:38.058950 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:11:38.058961 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:11:38.058971 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:11:38.058982 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:11:38.058992 | orchestrator |
2026-02-15 03:11:38.059003 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2026-02-15 03:11:38.059013 | orchestrator | Sunday 15 February 2026 03:11:25 +0000 (0:00:00.704) 0:00:10.115 *******
2026-02-15 03:11:38.059025 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-15 03:11:38.059036 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-15 03:11:38.059047 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:11:38.059058 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-15 03:11:38.059069 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-15 03:11:38.059079 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:11:38.059086 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-15 03:11:38.059093 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-15 03:11:38.059099 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:11:38.059106 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-15 03:11:38.059128 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-15 03:11:38.059135 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:11:38.059142 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-15 03:11:38.059148 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-15 03:11:38.059155 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:11:38.059162 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2026-02-15 03:11:38.059169 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2026-02-15 03:11:38.059175 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:11:38.059182 | orchestrator |
2026-02-15 03:11:38.059189 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2026-02-15 03:11:38.059195 | orchestrator | Sunday 15 February 2026 03:11:26 +0000 (0:00:00.703) 0:00:10.818 *******
2026-02-15 03:11:38.059202 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:11:38.059209 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:11:38.059215 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:11:38.059231 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:11:38.059238 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:11:38.059244 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:11:38.059251 | orchestrator |
2026-02-15 03:11:38.059258 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2026-02-15 03:11:38.059265 | orchestrator | Sunday 15 February 2026 03:11:27 +0000 (0:00:01.488) 0:00:12.307 *******
2026-02-15 03:11:38.059272 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:11:38.059279 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:11:38.059286 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:11:38.059293 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:11:38.059299 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:11:38.059306 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:11:38.059312 | orchestrator |
2026-02-15 03:11:38.059319 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2026-02-15 03:11:38.059326 | orchestrator | Sunday 15 February 2026 03:11:28 +0000 (0:00:00.866) 0:00:13.173 *******
2026-02-15 03:11:38.059333 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:11:38.059339 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:11:38.059346 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:11:38.059353 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:11:38.059359 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:11:38.059366 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:11:38.059372 | orchestrator |
2026-02-15 03:11:38.059379 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2026-02-15 03:11:38.059386 | orchestrator | Sunday 15 February 2026 03:11:33 +0000 (0:00:05.609) 0:00:18.782 *******
2026-02-15 03:11:38.059393 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:11:38.059405 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:11:38.059412 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:11:38.059418 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:11:38.059425 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:11:38.059432 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:11:38.059438 | orchestrator |
2026-02-15 03:11:38.059445 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2026-02-15 03:11:38.059452 | orchestrator | Sunday 15 February 2026 03:11:35 +0000 (0:00:01.027) 0:00:19.810 *******
2026-02-15 03:11:38.059459 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:11:38.059465 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:11:38.059472 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:11:38.059479 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:11:38.059485 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:11:38.059492 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:11:38.059498 | orchestrator |
2026-02-15 03:11:38.059505 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2026-02-15 03:11:38.059514 | orchestrator | Sunday 15 February 2026 03:11:36 +0000 (0:00:01.453) 0:00:21.264 *******
2026-02-15 03:11:38.059520 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:11:38.059527 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:11:38.059534 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:11:38.059540 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:11:38.059547 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:11:38.059554 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:11:38.059560 | orchestrator |
2026-02-15 03:11:38.059567 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2026-02-15 03:11:38.059574 | orchestrator | Sunday 15 February 2026 03:11:37 +0000 (0:00:00.694) 0:00:21.959 *******
2026-02-15 03:11:38.059580 | orchestrator | skipping: [testbed-node-3] => (item=rancher)
2026-02-15 03:11:38.059592 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)
2026-02-15 03:11:38.059599 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:11:38.059605 | orchestrator | skipping: [testbed-node-4] => (item=rancher)
2026-02-15 03:11:38.059617 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)
2026-02-15 03:11:38.059624 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:11:38.059630 | orchestrator | skipping: [testbed-node-5] => (item=rancher)
2026-02-15 03:11:38.059637 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)
2026-02-15 03:11:38.059644 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:11:38.059650 | orchestrator | skipping: [testbed-node-0] => (item=rancher)
2026-02-15 03:11:38.059657 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)
2026-02-15 03:11:38.059664 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:11:38.059670 | orchestrator | skipping: [testbed-node-1] => (item=rancher)
2026-02-15 03:11:38.059680 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)
2026-02-15 03:11:38.059691 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:11:38.059706 | orchestrator | skipping: [testbed-node-2] => (item=rancher)
2026-02-15 03:11:38.059721 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)
2026-02-15 03:11:38.059731 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:11:38.059741 | orchestrator |
2026-02-15 03:11:38.059751 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2026-02-15 03:11:38.059768 | orchestrator | Sunday 15 February 2026 03:11:38 +0000 (0:00:00.877) 0:00:22.836 *******
2026-02-15 03:13:03.231105 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:13:03.231264 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:13:03.231291 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:13:03.231309 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:13:03.231326 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:13:03.231392 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:13:03.231413 | orchestrator |
2026-02-15 03:13:03.231434 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
2026-02-15 03:13:03.231453 | orchestrator | Sunday 15 February 2026 03:11:38 +0000 (0:00:00.616) 0:00:23.453 *******
2026-02-15 03:13:03.231470 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:13:03.231487 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:13:03.231504 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:13:03.231521 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:13:03.231537 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:13:03.231551 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:13:03.231563 | orchestrator |
2026-02-15 03:13:03.231574 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2026-02-15 03:13:03.231586 | orchestrator |
2026-02-15 03:13:03.231598 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2026-02-15 03:13:03.231610 | orchestrator | Sunday 15 February 2026 03:11:39 +0000 (0:00:01.289) 0:00:24.742 *******
2026-02-15 03:13:03.231622 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:13:03.231635 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:13:03.231646 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:13:03.231658 | orchestrator |
2026-02-15 03:13:03.231669 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-15 03:13:03.231681 | orchestrator | Sunday 15 February 2026 03:11:41 +0000 (0:00:01.665) 0:00:26.408 *******
2026-02-15 03:13:03.231692 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:13:03.231703 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:13:03.231715 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:13:03.231725 | orchestrator |
2026-02-15 03:13:03.231736 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-15 03:13:03.231748 | orchestrator | Sunday 15 February 2026 03:11:42 +0000 (0:00:01.256) 0:00:27.665 *******
2026-02-15 03:13:03.231759 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:13:03.231770 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:13:03.231781 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:13:03.231794 | orchestrator |
2026-02-15 03:13:03.231805 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-15 03:13:03.231840 | orchestrator | Sunday 15 February 2026 03:11:43 +0000 (0:00:00.987) 0:00:28.652 *******
2026-02-15 03:13:03.231852 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:13:03.231862 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:13:03.231873 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:13:03.231884 | orchestrator |
2026-02-15 03:13:03.231896 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-15 03:13:03.231907 | orchestrator | Sunday 15 February 2026 03:11:44 +0000 (0:00:00.809) 0:00:29.462 *******
2026-02-15 03:13:03.231919 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:13:03.231928 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:13:03.231938 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:13:03.231947 | orchestrator |
2026-02-15 03:13:03.231957 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-15 03:13:03.231984 | orchestrator | Sunday 15 February 2026 03:11:45 +0000 (0:00:00.349) 0:00:29.812 *******
2026-02-15 03:13:03.231995 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:13:03.232004 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:13:03.232039 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:13:03.232049 | orchestrator |
2026-02-15 03:13:03.232059 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-15 03:13:03.232069 | orchestrator | Sunday 15 February 2026 03:11:46 +0000 (0:00:01.224) 0:00:31.036 *******
2026-02-15 03:13:03.232079 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:13:03.232089 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:13:03.232099 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:13:03.232109 | orchestrator |
2026-02-15 03:13:03.232118 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-15 03:13:03.232128 | orchestrator | Sunday 15 February 2026 03:11:47 +0000 (0:00:01.412) 0:00:32.449 *******
2026-02-15 03:13:03.232138 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:13:03.232147 | orchestrator |
2026-02-15 03:13:03.232157 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-15 03:13:03.232167
| orchestrator | Sunday 15 February 2026 03:11:48 +0000 (0:00:00.521) 0:00:32.970 ******* 2026-02-15 03:13:03.232176 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:13:03.232186 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:13:03.232195 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:13:03.232205 | orchestrator | 2026-02-15 03:13:03.232215 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-02-15 03:13:03.232224 | orchestrator | Sunday 15 February 2026 03:11:50 +0000 (0:00:02.314) 0:00:35.284 ******* 2026-02-15 03:13:03.232234 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:13:03.232244 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:13:03.232253 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:03.232263 | orchestrator | 2026-02-15 03:13:03.232272 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-02-15 03:13:03.232282 | orchestrator | Sunday 15 February 2026 03:11:50 +0000 (0:00:00.495) 0:00:35.780 ******* 2026-02-15 03:13:03.232292 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:13:03.232301 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:13:03.232311 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:03.232320 | orchestrator | 2026-02-15 03:13:03.232330 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-02-15 03:13:03.232339 | orchestrator | Sunday 15 February 2026 03:11:51 +0000 (0:00:00.996) 0:00:36.776 ******* 2026-02-15 03:13:03.232349 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:13:03.232359 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:13:03.232368 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:03.232378 | orchestrator | 2026-02-15 03:13:03.232388 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-02-15 03:13:03.232417 | orchestrator | Sunday 
15 February 2026 03:11:53 +0000 (0:00:01.243) 0:00:38.020 ******* 2026-02-15 03:13:03.232427 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:13:03.232445 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:13:03.232455 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:13:03.232465 | orchestrator | 2026-02-15 03:13:03.232474 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-02-15 03:13:03.232492 | orchestrator | Sunday 15 February 2026 03:11:53 +0000 (0:00:00.634) 0:00:38.654 ******* 2026-02-15 03:13:03.232507 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:13:03.232535 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:13:03.232552 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:13:03.232567 | orchestrator | 2026-02-15 03:13:03.232584 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-02-15 03:13:03.232600 | orchestrator | Sunday 15 February 2026 03:11:54 +0000 (0:00:00.375) 0:00:39.030 ******* 2026-02-15 03:13:03.232615 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:03.232630 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:13:03.232645 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:13:03.232661 | orchestrator | 2026-02-15 03:13:03.232690 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-02-15 03:13:03.232708 | orchestrator | Sunday 15 February 2026 03:11:55 +0000 (0:00:01.232) 0:00:40.262 ******* 2026-02-15 03:13:03.232725 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:13:03.232742 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:13:03.232752 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:13:03.232761 | orchestrator | 2026-02-15 03:13:03.232771 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-02-15 03:13:03.232781 | orchestrator | Sunday 15 February 2026 
03:11:58 +0000 (0:00:02.860) 0:00:43.123 ******* 2026-02-15 03:13:03.232790 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:13:03.232800 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:13:03.232810 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:13:03.232823 | orchestrator | 2026-02-15 03:13:03.232833 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-02-15 03:13:03.232843 | orchestrator | Sunday 15 February 2026 03:11:58 +0000 (0:00:00.328) 0:00:43.451 ******* 2026-02-15 03:13:03.232853 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-15 03:13:03.232865 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-15 03:13:03.232875 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-02-15 03:13:03.232885 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-15 03:13:03.232895 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-15 03:13:03.232904 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-02-15 03:13:03.232914 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-15 03:13:03.232923 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-02-15 03:13:03.232933 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-02-15 03:13:03.232950 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-15 03:13:03.232969 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-15 03:13:03.233004 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-02-15 03:13:03.233060 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-15 03:13:03.233076 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-15 03:13:03.233091 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2026-02-15 03:13:03.233106 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left). 2026-02-15 03:13:03.233130 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left). 2026-02-15 03:13:03.233147 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (15 retries left). 
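The "Verify that all nodes actually joined" task above retries up to 20 times before succeeding. The underlying pattern is a plain poll-until-ready loop; a minimal sketch in shell, where a counter stands in for the real node-state query (in the role this would be something like `kubectl get nodes`) and the expected node count and retry limit are illustrative, not taken from the role:

```shell
#!/bin/sh
# Poll-until-ready sketch. The real task queries cluster state on
# each attempt; here a counter simulates nodes joining one per
# attempt so the block is self-contained.
expected=3        # three master nodes in this deployment
ready=0
retries=5         # illustrative; the log shows 20
joined=no
while [ "$retries" -gt 0 ]; do
  ready=$((ready + 1))        # stand-in for querying node state
  if [ "$ready" -ge "$expected" ]; then
    joined=yes
    break
  fi
  retries=$((retries - 1))    # in the log: "N retries left"
done
echo "joined=$joined"
```

With a real check in place, the loop would also sleep between attempts, which accounts for the delay visible between the retry messages in the log.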
2026-02-15 03:13:03.233163 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:13:03.233179 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:13:03.233195 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:13:03.233210 | orchestrator | 2026-02-15 03:13:03.233240 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-02-15 03:13:46.384643 | orchestrator | Sunday 15 February 2026 03:13:03 +0000 (0:01:04.554) 0:01:48.006 ******* 2026-02-15 03:13:46.384737 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:13:46.384747 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:13:46.384754 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:13:46.384760 | orchestrator | 2026-02-15 03:13:46.384767 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-02-15 03:13:46.384774 | orchestrator | Sunday 15 February 2026 03:13:03 +0000 (0:00:00.318) 0:01:48.324 ******* 2026-02-15 03:13:46.384781 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:46.384787 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:13:46.384794 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:13:46.384800 | orchestrator | 2026-02-15 03:13:46.384807 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-02-15 03:13:46.384813 | orchestrator | Sunday 15 February 2026 03:13:04 +0000 (0:00:00.991) 0:01:49.316 ******* 2026-02-15 03:13:46.384819 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:46.384825 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:13:46.384832 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:13:46.384838 | orchestrator | 2026-02-15 03:13:46.384844 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-02-15 03:13:46.384850 | orchestrator | Sunday 15 February 2026 03:13:05 +0000 (0:00:01.207) 0:01:50.523 ******* 2026-02-15 03:13:46.384857 
| orchestrator | changed: [testbed-node-2] 2026-02-15 03:13:46.384863 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:13:46.384869 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:46.384875 | orchestrator | 2026-02-15 03:13:46.384881 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-02-15 03:13:46.384888 | orchestrator | Sunday 15 February 2026 03:13:32 +0000 (0:00:26.309) 0:02:16.833 ******* 2026-02-15 03:13:46.384894 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:13:46.384911 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:13:46.384928 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:13:46.384934 | orchestrator | 2026-02-15 03:13:46.384941 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-02-15 03:13:46.384947 | orchestrator | Sunday 15 February 2026 03:13:32 +0000 (0:00:00.640) 0:02:17.473 ******* 2026-02-15 03:13:46.384961 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:13:46.384967 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:13:46.384973 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:13:46.384979 | orchestrator | 2026-02-15 03:13:46.385002 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-02-15 03:13:46.385009 | orchestrator | Sunday 15 February 2026 03:13:33 +0000 (0:00:00.730) 0:02:18.203 ******* 2026-02-15 03:13:46.385016 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:46.385022 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:13:46.385028 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:13:46.385034 | orchestrator | 2026-02-15 03:13:46.385041 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-02-15 03:13:46.385047 | orchestrator | Sunday 15 February 2026 03:13:34 +0000 (0:00:00.648) 0:02:18.851 ******* 2026-02-15 03:13:46.385053 | orchestrator | ok: [testbed-node-0] 
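The node-token tasks above read `/var/lib/rancher/k3s/server/node-token` on the masters and store it for later use; k3s agents consume that token together with the server URL to join the cluster. A hedged sketch of that join environment (the server address matches this testbed's first master; the token value is a placeholder, not real data):

```shell
#!/bin/sh
# Illustrative only: assemble the environment variables a k3s agent
# uses to register with a server. TOKEN is a placeholder; in practice
# it is the contents of /var/lib/rancher/k3s/server/node-token on a
# server node, as read by the task above.
SERVER_IP=192.168.16.10
TOKEN="K10placeholder::server:placeholder"
JOIN_CMD="K3S_URL=https://${SERVER_IP}:6443 K3S_TOKEN=${TOKEN} k3s agent"
echo "$JOIN_CMD"
```

In this playbook the equivalent wiring happens in the `k3s_agent` role's "Configure the k3s service" task rather than on an interactive command line.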
2026-02-15 03:13:46.385060 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:13:46.385066 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:13:46.385072 | orchestrator | 2026-02-15 03:13:46.385079 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-02-15 03:13:46.385085 | orchestrator | Sunday 15 February 2026 03:13:34 +0000 (0:00:00.835) 0:02:19.687 ******* 2026-02-15 03:13:46.385132 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:13:46.385139 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:13:46.385145 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:13:46.385151 | orchestrator | 2026-02-15 03:13:46.385158 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-02-15 03:13:46.385164 | orchestrator | Sunday 15 February 2026 03:13:35 +0000 (0:00:00.346) 0:02:20.033 ******* 2026-02-15 03:13:46.385170 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:46.385177 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:13:46.385183 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:13:46.385189 | orchestrator | 2026-02-15 03:13:46.385196 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-02-15 03:13:46.385203 | orchestrator | Sunday 15 February 2026 03:13:35 +0000 (0:00:00.667) 0:02:20.700 ******* 2026-02-15 03:13:46.385211 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:46.385218 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:13:46.385225 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:13:46.385233 | orchestrator | 2026-02-15 03:13:46.385240 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-02-15 03:13:46.385250 | orchestrator | Sunday 15 February 2026 03:13:36 +0000 (0:00:00.710) 0:02:21.411 ******* 2026-02-15 03:13:46.385257 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:46.385264 | 
orchestrator | changed: [testbed-node-1] 2026-02-15 03:13:46.385271 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:13:46.385278 | orchestrator | 2026-02-15 03:13:46.385285 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-02-15 03:13:46.385293 | orchestrator | Sunday 15 February 2026 03:13:37 +0000 (0:00:01.094) 0:02:22.506 ******* 2026-02-15 03:13:46.385300 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:13:46.385307 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:13:46.385314 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:13:46.385321 | orchestrator | 2026-02-15 03:13:46.385328 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-02-15 03:13:46.385335 | orchestrator | Sunday 15 February 2026 03:13:38 +0000 (0:00:00.869) 0:02:23.375 ******* 2026-02-15 03:13:46.385342 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:13:46.385348 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:13:46.385355 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:13:46.385362 | orchestrator | 2026-02-15 03:13:46.385369 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-02-15 03:13:46.385377 | orchestrator | Sunday 15 February 2026 03:13:38 +0000 (0:00:00.313) 0:02:23.689 ******* 2026-02-15 03:13:46.385384 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:13:46.385391 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:13:46.385398 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:13:46.385405 | orchestrator | 2026-02-15 03:13:46.385413 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-02-15 03:13:46.385440 | orchestrator | Sunday 15 February 2026 03:13:39 +0000 (0:00:00.300) 0:02:23.989 ******* 2026-02-15 03:13:46.385447 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:13:46.385455 | orchestrator | 
ok: [testbed-node-1] 2026-02-15 03:13:46.385463 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:13:46.385470 | orchestrator | 2026-02-15 03:13:46.385477 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-02-15 03:13:46.385484 | orchestrator | Sunday 15 February 2026 03:13:39 +0000 (0:00:00.641) 0:02:24.630 ******* 2026-02-15 03:13:46.385492 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:13:46.385499 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:13:46.385506 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:13:46.385514 | orchestrator | 2026-02-15 03:13:46.385522 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-02-15 03:13:46.385531 | orchestrator | Sunday 15 February 2026 03:13:40 +0000 (0:00:00.893) 0:02:25.524 ******* 2026-02-15 03:13:46.385538 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-15 03:13:46.385546 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-15 03:13:46.385553 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-02-15 03:13:46.385560 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-15 03:13:46.385567 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-15 03:13:46.385575 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-02-15 03:13:46.385583 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-15 03:13:46.385591 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-15 
03:13:46.385597 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-02-15 03:13:46.385604 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-02-15 03:13:46.385610 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-15 03:13:46.385616 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-15 03:13:46.385623 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-02-15 03:13:46.385629 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-15 03:13:46.385635 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-15 03:13:46.385642 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-02-15 03:13:46.385648 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-15 03:13:46.385654 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-15 03:13:46.385661 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-02-15 03:13:46.385667 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-02-15 03:13:46.385673 | orchestrator | 2026-02-15 03:13:46.385679 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-02-15 03:13:46.385686 | orchestrator | 2026-02-15 03:13:46.385707 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-02-15 03:13:46.385714 | orchestrator | Sunday 15 February 2026 03:13:43 +0000 (0:00:03.017) 
0:02:28.541 ******* 2026-02-15 03:13:46.385720 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:13:46.385726 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:13:46.385739 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:13:46.385745 | orchestrator | 2026-02-15 03:13:46.385752 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-02-15 03:13:46.385758 | orchestrator | Sunday 15 February 2026 03:13:44 +0000 (0:00:00.323) 0:02:28.865 ******* 2026-02-15 03:13:46.385764 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:13:46.385771 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:13:46.385777 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:13:46.385783 | orchestrator | 2026-02-15 03:13:46.385789 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-02-15 03:13:46.385796 | orchestrator | Sunday 15 February 2026 03:13:44 +0000 (0:00:00.926) 0:02:29.791 ******* 2026-02-15 03:13:46.385802 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:13:46.385808 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:13:46.385815 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:13:46.385821 | orchestrator | 2026-02-15 03:13:46.385828 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-02-15 03:13:46.385834 | orchestrator | Sunday 15 February 2026 03:13:45 +0000 (0:00:00.323) 0:02:30.115 ******* 2026-02-15 03:13:46.385840 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:13:46.385847 | orchestrator | 2026-02-15 03:13:46.385853 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-02-15 03:13:46.385860 | orchestrator | Sunday 15 February 2026 03:13:45 +0000 (0:00:00.480) 0:02:30.595 ******* 2026-02-15 03:13:46.385866 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:13:46.385872 | 
orchestrator | skipping: [testbed-node-4] 2026-02-15 03:13:46.385879 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:13:46.385885 | orchestrator | 2026-02-15 03:13:46.385895 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-02-15 03:14:45.797335 | orchestrator | Sunday 15 February 2026 03:13:46 +0000 (0:00:00.571) 0:02:31.167 ******* 2026-02-15 03:14:45.797466 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:14:45.797488 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:14:45.797502 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:14:45.797516 | orchestrator | 2026-02-15 03:14:45.797530 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-02-15 03:14:45.797544 | orchestrator | Sunday 15 February 2026 03:13:46 +0000 (0:00:00.302) 0:02:31.469 ******* 2026-02-15 03:14:45.797556 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:14:45.797568 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:14:45.797581 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:14:45.797593 | orchestrator | 2026-02-15 03:14:45.797606 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-02-15 03:14:45.797620 | orchestrator | Sunday 15 February 2026 03:13:46 +0000 (0:00:00.291) 0:02:31.761 ******* 2026-02-15 03:14:45.797633 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:14:45.797644 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:14:45.797655 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:14:45.797667 | orchestrator | 2026-02-15 03:14:45.797679 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-02-15 03:14:45.797692 | orchestrator | Sunday 15 February 2026 03:13:47 +0000 (0:00:00.640) 0:02:32.402 ******* 2026-02-15 03:14:45.797704 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:14:45.797716 | 
orchestrator | changed: [testbed-node-4] 2026-02-15 03:14:45.797727 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:14:45.797740 | orchestrator | 2026-02-15 03:14:45.797753 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-02-15 03:14:45.797766 | orchestrator | Sunday 15 February 2026 03:13:49 +0000 (0:00:01.413) 0:02:33.815 ******* 2026-02-15 03:14:45.797779 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:14:45.797792 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:14:45.797804 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:14:45.797817 | orchestrator | 2026-02-15 03:14:45.797860 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-02-15 03:14:45.797874 | orchestrator | Sunday 15 February 2026 03:13:50 +0000 (0:00:01.293) 0:02:35.108 ******* 2026-02-15 03:14:45.797887 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:14:45.797899 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:14:45.797912 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:14:45.797923 | orchestrator | 2026-02-15 03:14:45.797937 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-02-15 03:14:45.797950 | orchestrator | 2026-02-15 03:14:45.797963 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-02-15 03:14:45.797976 | orchestrator | Sunday 15 February 2026 03:14:00 +0000 (0:00:10.298) 0:02:45.406 ******* 2026-02-15 03:14:45.798006 | orchestrator | ok: [testbed-manager] 2026-02-15 03:14:45.798086 | orchestrator | 2026-02-15 03:14:45.798102 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-02-15 03:14:45.798116 | orchestrator | Sunday 15 February 2026 03:14:01 +0000 (0:00:01.018) 0:02:46.425 ******* 2026-02-15 03:14:45.798166 | orchestrator | changed: [testbed-manager] 2026-02-15 
03:14:45.798180 | orchestrator | 2026-02-15 03:14:45.798194 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-15 03:14:45.798236 | orchestrator | Sunday 15 February 2026 03:14:02 +0000 (0:00:00.471) 0:02:46.896 ******* 2026-02-15 03:14:45.798250 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-15 03:14:45.798263 | orchestrator | 2026-02-15 03:14:45.798276 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-15 03:14:45.798290 | orchestrator | Sunday 15 February 2026 03:14:02 +0000 (0:00:00.557) 0:02:47.454 ******* 2026-02-15 03:14:45.798303 | orchestrator | changed: [testbed-manager] 2026-02-15 03:14:45.798316 | orchestrator | 2026-02-15 03:14:45.798329 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-02-15 03:14:45.798343 | orchestrator | Sunday 15 February 2026 03:14:03 +0000 (0:00:00.924) 0:02:48.378 ******* 2026-02-15 03:14:45.798356 | orchestrator | changed: [testbed-manager] 2026-02-15 03:14:45.798369 | orchestrator | 2026-02-15 03:14:45.798382 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-02-15 03:14:45.798395 | orchestrator | Sunday 15 February 2026 03:14:04 +0000 (0:00:00.603) 0:02:48.982 ******* 2026-02-15 03:14:45.798408 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-15 03:14:45.798421 | orchestrator | 2026-02-15 03:14:45.798454 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-02-15 03:14:45.798468 | orchestrator | Sunday 15 February 2026 03:14:05 +0000 (0:00:01.708) 0:02:50.690 ******* 2026-02-15 03:14:45.798482 | orchestrator | changed: [testbed-manager -> localhost] 2026-02-15 03:14:45.798495 | orchestrator | 2026-02-15 03:14:45.798508 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 
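The kubeconfig tasks in the play above fetch the file from the first master and rewrite its server address; the log shows the target endpoint `https://192.168.16.8:6443`. A self-contained sketch of that rewrite against a dummy kubeconfig fragment (k3s writes `https://127.0.0.1:6443` into its kubeconfig by default; the file name here is illustrative):

```shell
#!/bin/sh
# Create a dummy kubeconfig fragment, then rewrite the server
# address from the k3s default to the cluster VIP, mirroring the
# "Change server address in the kubeconfig" task.
cat > kubeconfig.yml <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF
sed -i 's|https://127.0.0.1:6443|https://192.168.16.8:6443|' kubeconfig.yml
grep 'server:' kubeconfig.yml
```

The same substitution is applied twice in the log: once for the operator's copy on the manager and once for the copy made available inside the manager service.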
2026-02-15 03:14:45.798521 | orchestrator | Sunday 15 February 2026 03:14:06 +0000 (0:00:00.863) 0:02:51.554 ******* 2026-02-15 03:14:45.798534 | orchestrator | changed: [testbed-manager] 2026-02-15 03:14:45.798548 | orchestrator | 2026-02-15 03:14:45.798561 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-02-15 03:14:45.798574 | orchestrator | Sunday 15 February 2026 03:14:07 +0000 (0:00:00.460) 0:02:52.014 ******* 2026-02-15 03:14:45.798587 | orchestrator | changed: [testbed-manager] 2026-02-15 03:14:45.798596 | orchestrator | 2026-02-15 03:14:45.798605 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-02-15 03:14:45.798613 | orchestrator | 2026-02-15 03:14:45.798621 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-02-15 03:14:45.798629 | orchestrator | Sunday 15 February 2026 03:14:07 +0000 (0:00:00.459) 0:02:52.474 ******* 2026-02-15 03:14:45.798637 | orchestrator | ok: [testbed-manager] 2026-02-15 03:14:45.798645 | orchestrator | 2026-02-15 03:14:45.798652 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-02-15 03:14:45.798658 | orchestrator | Sunday 15 February 2026 03:14:08 +0000 (0:00:00.428) 0:02:52.902 ******* 2026-02-15 03:14:45.798681 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-02-15 03:14:45.798689 | orchestrator | 2026-02-15 03:14:45.798716 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-02-15 03:14:45.798724 | orchestrator | Sunday 15 February 2026 03:14:08 +0000 (0:00:00.254) 0:02:53.157 ******* 2026-02-15 03:14:45.798730 | orchestrator | ok: [testbed-manager] 2026-02-15 03:14:45.798737 | orchestrator | 2026-02-15 03:14:45.798744 | orchestrator | TASK [kubectl : Install apt-transport-https package] 
*************************** 2026-02-15 03:14:45.798750 | orchestrator | Sunday 15 February 2026 03:14:09 +0000 (0:00:00.869) 0:02:54.026 ******* 2026-02-15 03:14:45.798757 | orchestrator | ok: [testbed-manager] 2026-02-15 03:14:45.798764 | orchestrator | 2026-02-15 03:14:45.798770 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-02-15 03:14:45.798777 | orchestrator | Sunday 15 February 2026 03:14:11 +0000 (0:00:01.818) 0:02:55.844 ******* 2026-02-15 03:14:45.798784 | orchestrator | changed: [testbed-manager] 2026-02-15 03:14:45.798790 | orchestrator | 2026-02-15 03:14:45.798797 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-02-15 03:14:45.798804 | orchestrator | Sunday 15 February 2026 03:14:11 +0000 (0:00:00.833) 0:02:56.678 ******* 2026-02-15 03:14:45.798810 | orchestrator | ok: [testbed-manager] 2026-02-15 03:14:45.798817 | orchestrator | 2026-02-15 03:14:45.798823 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2026-02-15 03:14:45.798830 | orchestrator | Sunday 15 February 2026 03:14:12 +0000 (0:00:00.489) 0:02:57.167 ******* 2026-02-15 03:14:45.798837 | orchestrator | changed: [testbed-manager] 2026-02-15 03:14:45.798843 | orchestrator | 2026-02-15 03:14:45.798850 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-02-15 03:14:45.798856 | orchestrator | Sunday 15 February 2026 03:14:20 +0000 (0:00:08.159) 0:03:05.327 ******* 2026-02-15 03:14:45.798863 | orchestrator | changed: [testbed-manager] 2026-02-15 03:14:45.798870 | orchestrator | 2026-02-15 03:14:45.798876 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-02-15 03:14:45.798883 | orchestrator | Sunday 15 February 2026 03:14:33 +0000 (0:00:13.182) 0:03:18.509 ******* 2026-02-15 03:14:45.798889 | orchestrator | ok: [testbed-manager] 2026-02-15 
03:14:45.798896 | orchestrator | 2026-02-15 03:14:45.798905 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-02-15 03:14:45.798916 | orchestrator | 2026-02-15 03:14:45.798926 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-02-15 03:14:45.798937 | orchestrator | Sunday 15 February 2026 03:14:34 +0000 (0:00:00.793) 0:03:19.303 ******* 2026-02-15 03:14:45.798947 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:14:45.798958 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:14:45.798968 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:14:45.798978 | orchestrator | 2026-02-15 03:14:45.798985 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-02-15 03:14:45.798992 | orchestrator | Sunday 15 February 2026 03:14:34 +0000 (0:00:00.321) 0:03:19.624 ******* 2026-02-15 03:14:45.798998 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:14:45.799005 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:14:45.799012 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:14:45.799018 | orchestrator | 2026-02-15 03:14:45.799025 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-02-15 03:14:45.799032 | orchestrator | Sunday 15 February 2026 03:14:35 +0000 (0:00:00.321) 0:03:19.946 ******* 2026-02-15 03:14:45.799038 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:14:45.799045 | orchestrator | 2026-02-15 03:14:45.799052 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-02-15 03:14:45.799059 | orchestrator | Sunday 15 February 2026 03:14:35 +0000 (0:00:00.790) 0:03:20.737 ******* 2026-02-15 03:14:45.799065 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-15 03:14:45.799079 | 
orchestrator | 2026-02-15 03:14:45.799086 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-02-15 03:14:45.799093 | orchestrator | Sunday 15 February 2026 03:14:36 +0000 (0:00:00.853) 0:03:21.590 ******* 2026-02-15 03:14:45.799099 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 03:14:45.799106 | orchestrator | 2026-02-15 03:14:45.799113 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-02-15 03:14:45.799119 | orchestrator | Sunday 15 February 2026 03:14:37 +0000 (0:00:00.885) 0:03:22.475 ******* 2026-02-15 03:14:45.799126 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:14:45.799132 | orchestrator | 2026-02-15 03:14:45.799139 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-02-15 03:14:45.799145 | orchestrator | Sunday 15 February 2026 03:14:37 +0000 (0:00:00.135) 0:03:22.611 ******* 2026-02-15 03:14:45.799152 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 03:14:45.799159 | orchestrator | 2026-02-15 03:14:45.799186 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-02-15 03:14:45.799193 | orchestrator | Sunday 15 February 2026 03:14:38 +0000 (0:00:01.054) 0:03:23.666 ******* 2026-02-15 03:14:45.799218 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:14:45.799231 | orchestrator | 2026-02-15 03:14:45.799238 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-02-15 03:14:45.799245 | orchestrator | Sunday 15 February 2026 03:14:39 +0000 (0:00:00.134) 0:03:23.800 ******* 2026-02-15 03:14:45.799252 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:14:45.799258 | orchestrator | 2026-02-15 03:14:45.799265 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-02-15 03:14:45.799272 | orchestrator | Sunday 15 
February 2026 03:14:39 +0000 (0:00:00.133) 0:03:23.934 ******* 2026-02-15 03:14:45.799285 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:14:45.799292 | orchestrator | 2026-02-15 03:14:45.799299 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-02-15 03:14:45.799306 | orchestrator | Sunday 15 February 2026 03:14:39 +0000 (0:00:00.123) 0:03:24.058 ******* 2026-02-15 03:14:45.799313 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:14:45.799319 | orchestrator | 2026-02-15 03:14:45.799326 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-02-15 03:14:45.799333 | orchestrator | Sunday 15 February 2026 03:14:39 +0000 (0:00:00.125) 0:03:24.183 ******* 2026-02-15 03:14:45.799346 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-15 03:15:53.055118 | orchestrator | 2026-02-15 03:15:53.055223 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-02-15 03:15:53.055236 | orchestrator | Sunday 15 February 2026 03:14:45 +0000 (0:00:06.389) 0:03:30.572 ******* 2026-02-15 03:15:53.055246 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-02-15 03:15:53.055255 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2026-02-15 03:15:53.055266 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-02-15 03:15:53.055276 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-02-15 03:15:53.055285 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-02-15 03:15:53.055294 | orchestrator | 2026-02-15 03:15:53.055303 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-02-15 03:15:53.055311 | orchestrator | Sunday 15 February 2026 03:15:28 +0000 (0:00:42.465) 0:04:13.038 ******* 2026-02-15 03:15:53.055385 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 03:15:53.055394 | orchestrator | 2026-02-15 03:15:53.055400 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-15 03:15:53.055405 | orchestrator | Sunday 15 February 2026 03:15:29 +0000 (0:00:01.404) 0:04:14.442 ******* 2026-02-15 03:15:53.055411 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-15 03:15:53.055416 | orchestrator | 2026-02-15 03:15:53.055441 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-15 03:15:53.055446 | orchestrator | Sunday 15 February 2026 03:15:31 +0000 (0:00:01.905) 0:04:16.348 ******* 2026-02-15 03:15:53.055452 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-15 03:15:53.055457 | orchestrator | 2026-02-15 03:15:53.055462 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-15 03:15:53.055468 | orchestrator | Sunday 15 February 2026 03:15:32 +0000 (0:00:01.121) 0:04:17.470 ******* 2026-02-15 03:15:53.055473 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:15:53.055478 | orchestrator | 2026-02-15 03:15:53.055484 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-15 03:15:53.055489 | orchestrator 
| Sunday 15 February 2026 03:15:32 +0000 (0:00:00.123) 0:04:17.593 ******* 2026-02-15 03:15:53.055494 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-15 03:15:53.055500 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-15 03:15:53.055505 | orchestrator | 2026-02-15 03:15:53.055510 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-15 03:15:53.055515 | orchestrator | Sunday 15 February 2026 03:15:34 +0000 (0:00:02.010) 0:04:19.603 ******* 2026-02-15 03:15:53.055520 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:15:53.055525 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:15:53.055530 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:15:53.055535 | orchestrator | 2026-02-15 03:15:53.055541 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-15 03:15:53.055546 | orchestrator | Sunday 15 February 2026 03:15:35 +0000 (0:00:00.394) 0:04:19.998 ******* 2026-02-15 03:15:53.055551 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:15:53.055556 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:15:53.055561 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:15:53.055566 | orchestrator | 2026-02-15 03:15:53.055571 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-15 03:15:53.055576 | orchestrator | 2026-02-15 03:15:53.055581 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-15 03:15:53.055587 | orchestrator | Sunday 15 February 2026 03:15:36 +0000 (0:00:00.895) 0:04:20.894 ******* 2026-02-15 03:15:53.055592 | orchestrator | ok: [testbed-manager] 2026-02-15 03:15:53.055597 | orchestrator | 2026-02-15 03:15:53.055602 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
*********************** 2026-02-15 03:15:53.055609 | orchestrator | Sunday 15 February 2026 03:15:36 +0000 (0:00:00.375) 0:04:21.269 ******* 2026-02-15 03:15:53.055617 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-15 03:15:53.055626 | orchestrator | 2026-02-15 03:15:53.055634 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-15 03:15:53.055643 | orchestrator | Sunday 15 February 2026 03:15:36 +0000 (0:00:00.247) 0:04:21.517 ******* 2026-02-15 03:15:53.055650 | orchestrator | changed: [testbed-manager] 2026-02-15 03:15:53.055660 | orchestrator | 2026-02-15 03:15:53.055668 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-15 03:15:53.055675 | orchestrator | 2026-02-15 03:15:53.055684 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-15 03:15:53.055693 | orchestrator | Sunday 15 February 2026 03:15:42 +0000 (0:00:05.456) 0:04:26.974 ******* 2026-02-15 03:15:53.055702 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:15:53.055710 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:15:53.055719 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:15:53.055727 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:15:53.055736 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:15:53.055743 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:15:53.055748 | orchestrator | 2026-02-15 03:15:53.055753 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-15 03:15:53.055759 | orchestrator | Sunday 15 February 2026 03:15:42 +0000 (0:00:00.810) 0:04:27.784 ******* 2026-02-15 03:15:53.055770 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-15 03:15:53.055777 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2026-02-15 03:15:53.055785 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-15 03:15:53.055794 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-15 03:15:53.055821 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-15 03:15:53.055830 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-15 03:15:53.055839 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-15 03:15:53.055847 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-15 03:15:53.055856 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-15 03:15:53.055864 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-15 03:15:53.055875 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-15 03:15:53.055884 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-15 03:15:53.055893 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-15 03:15:53.055901 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-15 03:15:53.055927 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-15 03:15:53.055937 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-15 03:15:53.055946 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-15 03:15:53.055955 | orchestrator | ok: [testbed-node-1 -> localhost] 
=> (item=node-role.osism.tech/network-plane=true) 2026-02-15 03:15:53.055963 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-15 03:15:53.055971 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-15 03:15:53.055979 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-15 03:15:53.055984 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-15 03:15:53.055990 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-15 03:15:53.055995 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-15 03:15:53.056000 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-15 03:15:53.056006 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-15 03:15:53.056011 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-15 03:15:53.056016 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-15 03:15:53.056021 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-15 03:15:53.056027 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-15 03:15:53.056032 | orchestrator | 2026-02-15 03:15:53.056037 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-15 03:15:53.056042 | orchestrator | Sunday 15 February 2026 03:15:51 +0000 (0:00:08.709) 0:04:36.493 ******* 2026-02-15 03:15:53.056047 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:15:53.056052 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:15:53.056057 | orchestrator | 
skipping: [testbed-node-5] 2026-02-15 03:15:53.056067 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:15:53.056072 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:15:53.056077 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:15:53.056083 | orchestrator | 2026-02-15 03:15:53.056091 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-15 03:15:53.056096 | orchestrator | Sunday 15 February 2026 03:15:52 +0000 (0:00:00.595) 0:04:37.089 ******* 2026-02-15 03:15:53.056101 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:15:53.056106 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:15:53.056112 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:15:53.056117 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:15:53.056122 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:15:53.056127 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:15:53.056132 | orchestrator | 2026-02-15 03:15:53.056137 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:15:53.056143 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 03:15:53.056150 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-15 03:15:53.056156 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-15 03:15:53.056161 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-15 03:15:53.056166 | orchestrator | testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-15 03:15:53.056176 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-15 03:15:53.481690 | orchestrator | testbed-node-5 : ok=16  
changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-15 03:15:53.481761 | orchestrator | 2026-02-15 03:15:53.481767 | orchestrator | 2026-02-15 03:15:53.481773 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:15:53.481780 | orchestrator | Sunday 15 February 2026 03:15:53 +0000 (0:00:00.747) 0:04:37.836 ******* 2026-02-15 03:15:53.481785 | orchestrator | =============================================================================== 2026-02-15 03:15:53.481790 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 64.55s 2026-02-15 03:15:53.481795 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 42.47s 2026-02-15 03:15:53.481800 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.31s 2026-02-15 03:15:53.481804 | orchestrator | kubectl : Install required packages ------------------------------------ 13.18s 2026-02-15 03:15:53.481808 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 10.30s 2026-02-15 03:15:53.481812 | orchestrator | Manage labels ----------------------------------------------------------- 8.71s 2026-02-15 03:15:53.481816 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 8.16s 2026-02-15 03:15:53.481820 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 6.39s 2026-02-15 03:15:53.481823 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.61s 2026-02-15 03:15:53.481827 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.46s 2026-02-15 03:15:53.481832 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.02s 2026-02-15 03:15:53.481836 | orchestrator 
| k3s_server : Detect Kubernetes version for label compatibility ---------- 2.86s 2026-02-15 03:15:53.481857 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.31s 2026-02-15 03:15:53.481861 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.23s 2026-02-15 03:15:53.481865 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.01s 2026-02-15 03:15:53.481869 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.98s 2026-02-15 03:15:53.481873 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 1.91s 2026-02-15 03:15:53.481877 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.82s 2026-02-15 03:15:53.481881 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.71s 2026-02-15 03:15:53.481885 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.67s 2026-02-15 03:15:53.863752 | orchestrator | + osism apply copy-kubeconfig 2026-02-15 03:16:06.118724 | orchestrator | 2026-02-15 03:16:06 | INFO  | Task f5edddeb-35bc-4320-92f0-ab434de5a99f (copy-kubeconfig) was prepared for execution. 2026-02-15 03:16:06.118819 | orchestrator | 2026-02-15 03:16:06 | INFO  | It takes a moment until task f5edddeb-35bc-4320-92f0-ab434de5a99f (copy-kubeconfig) has been started and output is visible here. 
2026-02-15 03:16:13.850833 | orchestrator | 2026-02-15 03:16:13.851004 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-02-15 03:16:13.851031 | orchestrator | 2026-02-15 03:16:13.851049 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-02-15 03:16:13.851067 | orchestrator | Sunday 15 February 2026 03:16:10 +0000 (0:00:00.187) 0:00:00.187 ******* 2026-02-15 03:16:13.851087 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-15 03:16:13.851105 | orchestrator | 2026-02-15 03:16:13.851124 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-02-15 03:16:13.851168 | orchestrator | Sunday 15 February 2026 03:16:11 +0000 (0:00:00.935) 0:00:01.122 ******* 2026-02-15 03:16:13.851189 | orchestrator | changed: [testbed-manager] 2026-02-15 03:16:13.851233 | orchestrator | 2026-02-15 03:16:13.851253 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-02-15 03:16:13.851277 | orchestrator | Sunday 15 February 2026 03:16:12 +0000 (0:00:01.288) 0:00:02.410 ******* 2026-02-15 03:16:13.851296 | orchestrator | changed: [testbed-manager] 2026-02-15 03:16:13.851315 | orchestrator | 2026-02-15 03:16:13.851335 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:16:13.851353 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 03:16:13.851406 | orchestrator | 2026-02-15 03:16:13.851424 | orchestrator | 2026-02-15 03:16:13.851443 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:16:13.851462 | orchestrator | Sunday 15 February 2026 03:16:13 +0000 (0:00:00.507) 0:00:02.918 ******* 2026-02-15 03:16:13.851480 | orchestrator | 
=============================================================================== 2026-02-15 03:16:13.851517 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.29s 2026-02-15 03:16:13.851551 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.94s 2026-02-15 03:16:13.851569 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.51s 2026-02-15 03:16:14.243758 | orchestrator | + sh -c /opt/configuration/scripts/deploy/200-infrastructure.sh 2026-02-15 03:16:26.640721 | orchestrator | 2026-02-15 03:16:26 | INFO  | Task 8841adc2-8680-47c1-9a20-ff7218c6f4d6 (openstackclient) was prepared for execution. 2026-02-15 03:16:26.640834 | orchestrator | 2026-02-15 03:16:26 | INFO  | It takes a moment until task 8841adc2-8680-47c1-9a20-ff7218c6f4d6 (openstackclient) has been started and output is visible here. 2026-02-15 03:17:16.179031 | orchestrator | 2026-02-15 03:17:16.179162 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2026-02-15 03:17:16.179220 | orchestrator | 2026-02-15 03:17:16.179251 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2026-02-15 03:17:16.179269 | orchestrator | Sunday 15 February 2026 03:16:31 +0000 (0:00:00.273) 0:00:00.274 ******* 2026-02-15 03:17:16.179288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2026-02-15 03:17:16.179308 | orchestrator | 2026-02-15 03:17:16.179326 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2026-02-15 03:17:16.179342 | orchestrator | Sunday 15 February 2026 03:16:31 +0000 (0:00:00.258) 0:00:00.533 ******* 2026-02-15 03:17:16.179361 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2026-02-15 
03:17:16.179380 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2026-02-15 03:17:16.179398 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2026-02-15 03:17:16.179415 | orchestrator | 2026-02-15 03:17:16.179432 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2026-02-15 03:17:16.179449 | orchestrator | Sunday 15 February 2026 03:16:32 +0000 (0:00:01.366) 0:00:01.899 ******* 2026-02-15 03:17:16.179467 | orchestrator | changed: [testbed-manager] 2026-02-15 03:17:16.179520 | orchestrator | 2026-02-15 03:17:16.179539 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2026-02-15 03:17:16.179559 | orchestrator | Sunday 15 February 2026 03:16:34 +0000 (0:00:01.558) 0:00:03.457 ******* 2026-02-15 03:17:16.179576 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2026-02-15 03:17:16.179594 | orchestrator | ok: [testbed-manager] 2026-02-15 03:17:16.179612 | orchestrator | 2026-02-15 03:17:16.179631 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2026-02-15 03:17:16.179650 | orchestrator | Sunday 15 February 2026 03:17:10 +0000 (0:00:36.058) 0:00:39.515 ******* 2026-02-15 03:17:16.179669 | orchestrator | changed: [testbed-manager] 2026-02-15 03:17:16.179689 | orchestrator | 2026-02-15 03:17:16.179708 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2026-02-15 03:17:16.179726 | orchestrator | Sunday 15 February 2026 03:17:11 +0000 (0:00:01.022) 0:00:40.538 ******* 2026-02-15 03:17:16.179743 | orchestrator | ok: [testbed-manager] 2026-02-15 03:17:16.179760 | orchestrator | 2026-02-15 03:17:16.179777 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2026-02-15 03:17:16.179795 | orchestrator | Sunday 15 February 2026 03:17:12 
+0000 (0:00:00.737) 0:00:41.275 ******* 2026-02-15 03:17:16.179814 | orchestrator | changed: [testbed-manager] 2026-02-15 03:17:16.179833 | orchestrator | 2026-02-15 03:17:16.179853 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2026-02-15 03:17:16.179871 | orchestrator | Sunday 15 February 2026 03:17:13 +0000 (0:00:01.538) 0:00:42.813 ******* 2026-02-15 03:17:16.179890 | orchestrator | changed: [testbed-manager] 2026-02-15 03:17:16.179910 | orchestrator | 2026-02-15 03:17:16.179928 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2026-02-15 03:17:16.179947 | orchestrator | Sunday 15 February 2026 03:17:14 +0000 (0:00:00.769) 0:00:43.583 ******* 2026-02-15 03:17:16.179965 | orchestrator | changed: [testbed-manager] 2026-02-15 03:17:16.179983 | orchestrator | 2026-02-15 03:17:16.179995 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2026-02-15 03:17:16.180005 | orchestrator | Sunday 15 February 2026 03:17:15 +0000 (0:00:00.605) 0:00:44.189 ******* 2026-02-15 03:17:16.180016 | orchestrator | ok: [testbed-manager] 2026-02-15 03:17:16.180027 | orchestrator | 2026-02-15 03:17:16.180038 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:17:16.180050 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 03:17:16.180062 | orchestrator | 2026-02-15 03:17:16.180073 | orchestrator | 2026-02-15 03:17:16.180099 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:17:16.180110 | orchestrator | Sunday 15 February 2026 03:17:15 +0000 (0:00:00.401) 0:00:44.591 ******* 2026-02-15 03:17:16.180121 | orchestrator | =============================================================================== 2026-02-15 03:17:16.180132 | orchestrator | 
osism.services.openstackclient : Manage openstackclient service -------- 36.06s 2026-02-15 03:17:16.180142 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.56s 2026-02-15 03:17:16.180153 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.54s 2026-02-15 03:17:16.180163 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.37s 2026-02-15 03:17:16.180174 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.02s 2026-02-15 03:17:16.180184 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.77s 2026-02-15 03:17:16.180195 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.74s 2026-02-15 03:17:16.180205 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.61s 2026-02-15 03:17:16.180216 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.40s 2026-02-15 03:17:16.180227 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.26s 2026-02-15 03:17:18.700068 | orchestrator | 2026-02-15 03:17:18 | INFO  | Task 5d55c4df-e1b9-4264-9092-721a11773f5b (common) was prepared for execution. 2026-02-15 03:17:18.700189 | orchestrator | 2026-02-15 03:17:18 | INFO  | It takes a moment until task 5d55c4df-e1b9-4264-9092-721a11773f5b (common) has been started and output is visible here. 
2026-02-15 03:17:31.574183 | orchestrator |
2026-02-15 03:17:31.574314 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-15 03:17:31.574338 | orchestrator |
2026-02-15 03:17:31.574355 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-15 03:17:31.574371 | orchestrator | Sunday 15 February 2026 03:17:23 +0000 (0:00:00.297) 0:00:00.297 *******
2026-02-15 03:17:31.574387 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:17:31.574402 | orchestrator |
2026-02-15 03:17:31.574418 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-15 03:17:31.574432 | orchestrator | Sunday 15 February 2026 03:17:24 +0000 (0:00:01.369) 0:00:01.666 *******
2026-02-15 03:17:31.574446 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 03:17:31.574460 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 03:17:31.574474 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 03:17:31.574486 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-15 03:17:31.574558 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 03:17:31.574586 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 03:17:31.574600 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 03:17:31.574616 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-15 03:17:31.574629 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 03:17:31.574666 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 03:17:31.574681 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 03:17:31.574695 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 03:17:31.574707 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 03:17:31.574745 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 03:17:31.574759 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 03:17:31.574774 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 03:17:31.574787 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 03:17:31.574800 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 03:17:31.574813 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 03:17:31.574826 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 03:17:31.574839 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 03:17:31.574852 | orchestrator | 2026-02-15 03:17:31.574870 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-15 03:17:31.574884 | orchestrator | Sunday 15 February 2026 03:17:27 +0000 (0:00:02.758) 0:00:04.424 ******* 2026-02-15 03:17:31.574911 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, 
testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:17:31.574926 | orchestrator | 2026-02-15 03:17:31.574960 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-15 03:17:31.574975 | orchestrator | Sunday 15 February 2026 03:17:28 +0000 (0:00:01.490) 0:00:05.915 ******* 2026-02-15 03:17:31.574994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:31.575015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:31.575051 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:31.575065 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:31.575078 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:31.575100 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:31.575114 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:31.575128 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:31.575143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:31.575163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:32.845663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:32.845778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:32.845790 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': 
{'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:32.845803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:32.845813 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:32.845830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 
03:17:32.845839 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:32.845867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:32.845876 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:32.845893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:32.845902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:32.845910 | orchestrator | 2026-02-15 03:17:32.845920 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-15 03:17:32.845930 | orchestrator | Sunday 15 February 2026 03:17:32 +0000 (0:00:03.779) 0:00:09.694 ******* 2026-02-15 03:17:32.845940 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:32.845949 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:32.845958 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:32.845966 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:17:32.845976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:32.845995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:33.476970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:33.477084 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:17:33.477165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:33.477191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:33.477220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:33.477236 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:17:33.477250 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:33.477266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:33.477281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:33.477342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:33.477361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:33.477374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:33.477388 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:17:33.477402 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:17:33.477415 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:33.477438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:33.477454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:33.477468 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:17:33.477483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:33.477554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:34.430566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:34.430655 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:17:34.430672 | orchestrator | 2026-02-15 03:17:34.430696 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-15 03:17:34.430712 | orchestrator | Sunday 15 February 2026 03:17:33 +0000 (0:00:00.958) 0:00:10.653 ******* 2026-02-15 03:17:34.430728 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:34.430744 | orchestrator | skipping: 
[testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:34.430758 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:34.430770 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:17:34.430805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:34.430843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:34.430858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:34.430899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:34.430913 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:17:34.430926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:34.430939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:34.430953 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:17:34.430972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:34.430985 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-15 03:17:34.431009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:34.431022 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:17:34.431036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:34.431075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:39.697458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:39.697664 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:17:39.697680 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:39.697690 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:39.697699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:39.697706 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:17:39.697729 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 03:17:39.697736 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:39.697743 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:39.697749 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:17:39.697756 | orchestrator | 2026-02-15 
03:17:39.697764 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-15 03:17:39.697771 | orchestrator | Sunday 15 February 2026 03:17:35 +0000 (0:00:01.893) 0:00:12.546 ******* 2026-02-15 03:17:39.697778 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:17:39.697784 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:17:39.697790 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:17:39.697796 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:17:39.697816 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:17:39.697823 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:17:39.697829 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:17:39.697837 | orchestrator | 2026-02-15 03:17:39.697848 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-15 03:17:39.697859 | orchestrator | Sunday 15 February 2026 03:17:36 +0000 (0:00:00.776) 0:00:13.322 ******* 2026-02-15 03:17:39.697869 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:17:39.697879 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:17:39.697889 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:17:39.697900 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:17:39.697909 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:17:39.697920 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:17:39.697930 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:17:39.697939 | orchestrator | 2026-02-15 03:17:39.697945 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-15 03:17:39.697951 | orchestrator | Sunday 15 February 2026 03:17:37 +0000 (0:00:00.905) 0:00:14.228 ******* 2026-02-15 03:17:39.697958 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:39.697979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:39.697996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:39.698003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:39.698011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:39.698067 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:39.698083 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:42.565672 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565836 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565862 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565871 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565901 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565914 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565921 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:42.565927 | orchestrator | 2026-02-15 03:17:42.565935 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-15 03:17:42.565943 | orchestrator | Sunday 15 February 2026 03:17:40 +0000 
(0:00:03.501) 0:00:17.730 ******* 2026-02-15 03:17:42.565950 | orchestrator | [WARNING]: Skipped 2026-02-15 03:17:42.565957 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-15 03:17:42.565964 | orchestrator | to this access issue: 2026-02-15 03:17:42.565971 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-15 03:17:42.565978 | orchestrator | directory 2026-02-15 03:17:42.565984 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 03:17:42.565992 | orchestrator | 2026-02-15 03:17:42.565998 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-15 03:17:42.566004 | orchestrator | Sunday 15 February 2026 03:17:41 +0000 (0:00:01.019) 0:00:18.749 ******* 2026-02-15 03:17:42.566011 | orchestrator | [WARNING]: Skipped 2026-02-15 03:17:42.566059 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-15 03:17:52.898859 | orchestrator | to this access issue: 2026-02-15 03:17:52.898979 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-15 03:17:52.898992 | orchestrator | directory 2026-02-15 03:17:52.899002 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 03:17:52.899012 | orchestrator | 2026-02-15 03:17:52.899022 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-15 03:17:52.899031 | orchestrator | Sunday 15 February 2026 03:17:42 +0000 (0:00:01.293) 0:00:20.043 ******* 2026-02-15 03:17:52.899039 | orchestrator | [WARNING]: Skipped 2026-02-15 03:17:52.899047 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-15 03:17:52.899056 | orchestrator | to this access issue: 2026-02-15 03:17:52.899064 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 
2026-02-15 03:17:52.899072 | orchestrator | directory 2026-02-15 03:17:52.899080 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 03:17:52.899088 | orchestrator | 2026-02-15 03:17:52.899096 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-15 03:17:52.899104 | orchestrator | Sunday 15 February 2026 03:17:43 +0000 (0:00:00.894) 0:00:20.937 ******* 2026-02-15 03:17:52.899112 | orchestrator | [WARNING]: Skipped 2026-02-15 03:17:52.899120 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-15 03:17:52.899128 | orchestrator | to this access issue: 2026-02-15 03:17:52.899136 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-15 03:17:52.899144 | orchestrator | directory 2026-02-15 03:17:52.899152 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 03:17:52.899160 | orchestrator | 2026-02-15 03:17:52.899168 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-15 03:17:52.899176 | orchestrator | Sunday 15 February 2026 03:17:44 +0000 (0:00:00.894) 0:00:21.832 ******* 2026-02-15 03:17:52.899184 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:17:52.899192 | orchestrator | changed: [testbed-manager] 2026-02-15 03:17:52.899200 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:17:52.899208 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:17:52.899216 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:17:52.899223 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:17:52.899231 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:17:52.899239 | orchestrator | 2026-02-15 03:17:52.899266 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-15 03:17:52.899275 | orchestrator | Sunday 15 February 2026 03:17:47 +0000 (0:00:02.628) 0:00:24.460 ******* 
2026-02-15 03:17:52.899283 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 03:17:52.899293 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 03:17:52.899301 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 03:17:52.899309 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 03:17:52.899317 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 03:17:52.899326 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 03:17:52.899334 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 03:17:52.899342 | orchestrator | 2026-02-15 03:17:52.899350 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-15 03:17:52.899358 | orchestrator | Sunday 15 February 2026 03:17:49 +0000 (0:00:02.160) 0:00:26.620 ******* 2026-02-15 03:17:52.899366 | orchestrator | changed: [testbed-manager] 2026-02-15 03:17:52.899374 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:17:52.899402 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:17:52.899412 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:17:52.899421 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:17:52.899430 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:17:52.899439 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:17:52.899448 | orchestrator | 2026-02-15 03:17:52.899456 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-15 03:17:52.899466 | orchestrator | Sunday 15 
February 2026 03:17:51 +0000 (0:00:01.945) 0:00:28.566 ******* 2026-02-15 03:17:52.899477 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:52.899507 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:52.899517 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:52.899528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:52.899577 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:52.899591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:52.899613 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:52.899637 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:52.899653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:52.899674 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}}) 2026-02-15 03:17:59.193785 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.193915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:59.193934 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.193948 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:59.193985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:59.193998 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.194009 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:59.194116 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 
'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:17:59.194130 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.194142 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.194154 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.194175 | orchestrator | 
2026-02-15 03:17:59.194188 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-15 03:17:59.194202 | orchestrator | Sunday 15 February 2026 03:17:53 +0000 (0:00:01.758) 0:00:30.324 ******* 2026-02-15 03:17:59.194213 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 03:17:59.194224 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 03:17:59.194235 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 03:17:59.194246 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 03:17:59.194256 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 03:17:59.194267 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 03:17:59.194277 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 03:17:59.194288 | orchestrator | 2026-02-15 03:17:59.194299 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-15 03:17:59.194311 | orchestrator | Sunday 15 February 2026 03:17:55 +0000 (0:00:02.001) 0:00:32.325 ******* 2026-02-15 03:17:59.194323 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 03:17:59.194336 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 03:17:59.194348 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 03:17:59.194361 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 03:17:59.194391 | 
orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 03:17:59.194405 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 03:17:59.194417 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 03:17:59.194429 | orchestrator | 2026-02-15 03:17:59.194441 | orchestrator | TASK [common : Check common containers] **************************************** 2026-02-15 03:17:59.194454 | orchestrator | Sunday 15 February 2026 03:17:56 +0000 (0:00:01.814) 0:00:34.140 ******* 2026-02-15 03:17:59.194467 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:59.194490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:59.861196 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:59.861370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:59.861390 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:59.861402 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:59.861414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.861427 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 03:17:59.861438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 
03:17:59.861469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.861580 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.861607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.861628 
| orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.861651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.861667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.861678 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:17:59.861704 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:19:26.474572 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:19:26.474724 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:19:26.474754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:19:26.474804 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:19:26.474823 | orchestrator | 2026-02-15 03:19:26.474842 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-02-15 03:19:26.474861 | orchestrator | Sunday 15 February 2026 03:17:59 +0000 (0:00:02.903) 0:00:37.044 ******* 2026-02-15 03:19:26.474878 | orchestrator | changed: [testbed-manager] 2026-02-15 03:19:26.474897 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:19:26.474914 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:19:26.474931 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:19:26.474948 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:19:26.474965 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:19:26.474982 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:19:26.475000 | orchestrator | 2026-02-15 03:19:26.475019 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-02-15 03:19:26.475038 | orchestrator | Sunday 15 February 2026 03:18:01 +0000 (0:00:01.458) 0:00:38.502 ******* 2026-02-15 03:19:26.475057 | orchestrator | changed: [testbed-manager] 2026-02-15 03:19:26.475076 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:19:26.475095 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:19:26.475113 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:19:26.475132 | orchestrator | changed: 
[testbed-node-3] 2026-02-15 03:19:26.475151 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:19:26.475171 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:19:26.475190 | orchestrator | 2026-02-15 03:19:26.475209 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 03:19:26.475229 | orchestrator | Sunday 15 February 2026 03:18:02 +0000 (0:00:01.101) 0:00:39.604 ******* 2026-02-15 03:19:26.475249 | orchestrator | 2026-02-15 03:19:26.475269 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 03:19:26.475288 | orchestrator | Sunday 15 February 2026 03:18:02 +0000 (0:00:00.067) 0:00:39.671 ******* 2026-02-15 03:19:26.475302 | orchestrator | 2026-02-15 03:19:26.475315 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 03:19:26.475351 | orchestrator | Sunday 15 February 2026 03:18:02 +0000 (0:00:00.066) 0:00:39.737 ******* 2026-02-15 03:19:26.475364 | orchestrator | 2026-02-15 03:19:26.475381 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 03:19:26.475400 | orchestrator | Sunday 15 February 2026 03:18:02 +0000 (0:00:00.065) 0:00:39.803 ******* 2026-02-15 03:19:26.475418 | orchestrator | 2026-02-15 03:19:26.475438 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 03:19:26.475459 | orchestrator | Sunday 15 February 2026 03:18:02 +0000 (0:00:00.237) 0:00:40.040 ******* 2026-02-15 03:19:26.475479 | orchestrator | 2026-02-15 03:19:26.475492 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 03:19:26.475503 | orchestrator | Sunday 15 February 2026 03:18:02 +0000 (0:00:00.068) 0:00:40.109 ******* 2026-02-15 03:19:26.475514 | orchestrator | 2026-02-15 03:19:26.475525 | orchestrator | TASK [common : Flush handlers] 
************************************************* 2026-02-15 03:19:26.475536 | orchestrator | Sunday 15 February 2026 03:18:02 +0000 (0:00:00.080) 0:00:40.189 ******* 2026-02-15 03:19:26.475546 | orchestrator | 2026-02-15 03:19:26.475557 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-15 03:19:26.475568 | orchestrator | Sunday 15 February 2026 03:18:03 +0000 (0:00:00.090) 0:00:40.280 ******* 2026-02-15 03:19:26.475578 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:19:26.475589 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:19:26.475600 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:19:26.475610 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:19:26.475621 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:19:26.475654 | orchestrator | changed: [testbed-manager] 2026-02-15 03:19:26.475666 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:19:26.475676 | orchestrator | 2026-02-15 03:19:26.475687 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-15 03:19:26.475698 | orchestrator | Sunday 15 February 2026 03:18:41 +0000 (0:00:38.184) 0:01:18.464 ******* 2026-02-15 03:19:26.475709 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:19:26.475719 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:19:26.475730 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:19:26.475741 | orchestrator | changed: [testbed-manager] 2026-02-15 03:19:26.475752 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:19:26.475804 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:19:26.475815 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:19:26.475826 | orchestrator | 2026-02-15 03:19:26.475838 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-15 03:19:26.475849 | orchestrator | Sunday 15 February 2026 03:19:16 +0000 (0:00:34.857) 0:01:53.321 
******* 2026-02-15 03:19:26.475860 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:19:26.475872 | orchestrator | ok: [testbed-manager] 2026-02-15 03:19:26.475883 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:19:26.475893 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:19:26.475904 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:19:26.475915 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:19:26.475925 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:19:26.475936 | orchestrator | 2026-02-15 03:19:26.475946 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-15 03:19:26.475957 | orchestrator | Sunday 15 February 2026 03:19:18 +0000 (0:00:02.007) 0:01:55.328 ******* 2026-02-15 03:19:26.475968 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:19:26.475979 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:19:26.475990 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:19:26.476000 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:19:26.476025 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:19:26.476046 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:19:26.476058 | orchestrator | changed: [testbed-manager] 2026-02-15 03:19:26.476069 | orchestrator | 2026-02-15 03:19:26.476079 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:19:26.476103 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 03:19:26.476116 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 03:19:26.476127 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 03:19:26.476148 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 03:19:26.476160 | orchestrator | 
testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 03:19:26.476171 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 03:19:26.476182 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 03:19:26.476193 | orchestrator | 2026-02-15 03:19:26.476204 | orchestrator | 2026-02-15 03:19:26.476215 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:19:26.476226 | orchestrator | Sunday 15 February 2026 03:19:26 +0000 (0:00:08.297) 0:02:03.626 ******* 2026-02-15 03:19:26.476237 | orchestrator | =============================================================================== 2026-02-15 03:19:26.476247 | orchestrator | common : Restart fluentd container ------------------------------------- 38.18s 2026-02-15 03:19:26.476258 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 34.86s 2026-02-15 03:19:26.476269 | orchestrator | common : Restart cron container ----------------------------------------- 8.30s 2026-02-15 03:19:26.476280 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 3.78s 2026-02-15 03:19:26.476291 | orchestrator | common : Copying over config.json files for services -------------------- 3.50s 2026-02-15 03:19:26.476301 | orchestrator | common : Check common containers ---------------------------------------- 2.90s 2026-02-15 03:19:26.476312 | orchestrator | common : Ensuring config directories exist ------------------------------ 2.76s 2026-02-15 03:19:26.476323 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 2.63s 2026-02-15 03:19:26.476334 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.16s 2026-02-15 03:19:26.476344 | orchestrator | common : Initializing 
toolbox container using normal user --------------- 2.01s 2026-02-15 03:19:26.476355 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.00s 2026-02-15 03:19:26.476366 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 1.95s 2026-02-15 03:19:26.476377 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 1.89s 2026-02-15 03:19:26.476388 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 1.81s 2026-02-15 03:19:26.476399 | orchestrator | common : Ensuring config directories have correct owner and permission --- 1.76s 2026-02-15 03:19:26.476410 | orchestrator | common : include_tasks -------------------------------------------------- 1.49s 2026-02-15 03:19:26.476429 | orchestrator | common : Creating log volume -------------------------------------------- 1.46s 2026-02-15 03:19:26.925427 | orchestrator | common : include_tasks -------------------------------------------------- 1.37s 2026-02-15 03:19:26.925540 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.29s 2026-02-15 03:19:26.925561 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.10s 2026-02-15 03:19:29.549726 | orchestrator | 2026-02-15 03:19:29 | INFO  | Task d4179d5c-2b56-495c-80b3-771e0071d00d (loadbalancer) was prepared for execution. 2026-02-15 03:19:29.549921 | orchestrator | 2026-02-15 03:19:29 | INFO  | It takes a moment until task d4179d5c-2b56-495c-80b3-771e0071d00d (loadbalancer) has been started and output is visible here. 
2026-02-15 03:19:43.983729 | orchestrator | 2026-02-15 03:19:43.983811 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 03:19:43.983818 | orchestrator | 2026-02-15 03:19:43.983824 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 03:19:43.983829 | orchestrator | Sunday 15 February 2026 03:19:34 +0000 (0:00:00.261) 0:00:00.261 ******* 2026-02-15 03:19:43.983833 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:19:43.983839 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:19:43.983843 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:19:43.983847 | orchestrator | 2026-02-15 03:19:43.983851 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 03:19:43.983856 | orchestrator | Sunday 15 February 2026 03:19:34 +0000 (0:00:00.320) 0:00:00.581 ******* 2026-02-15 03:19:43.983861 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-15 03:19:43.983865 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-15 03:19:43.983869 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-15 03:19:43.983873 | orchestrator | 2026-02-15 03:19:43.983877 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-15 03:19:43.983881 | orchestrator | 2026-02-15 03:19:43.983885 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-15 03:19:43.983889 | orchestrator | Sunday 15 February 2026 03:19:34 +0000 (0:00:00.478) 0:00:01.060 ******* 2026-02-15 03:19:43.983894 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:19:43.983898 | orchestrator | 2026-02-15 03:19:43.983902 | orchestrator | TASK [loadbalancer : Check IPv6 support] 
*************************************** 2026-02-15 03:19:43.983906 | orchestrator | Sunday 15 February 2026 03:19:35 +0000 (0:00:00.606) 0:00:01.666 ******* 2026-02-15 03:19:43.983910 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:19:43.983914 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:19:43.983918 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:19:43.983922 | orchestrator | 2026-02-15 03:19:43.983926 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-15 03:19:43.983930 | orchestrator | Sunday 15 February 2026 03:19:36 +0000 (0:00:00.609) 0:00:02.276 ******* 2026-02-15 03:19:43.983934 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:19:43.983938 | orchestrator | 2026-02-15 03:19:43.983942 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-15 03:19:43.983946 | orchestrator | Sunday 15 February 2026 03:19:36 +0000 (0:00:00.724) 0:00:03.001 ******* 2026-02-15 03:19:43.983950 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:19:43.983954 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:19:43.983958 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:19:43.983962 | orchestrator | 2026-02-15 03:19:43.983966 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-15 03:19:43.983970 | orchestrator | Sunday 15 February 2026 03:19:37 +0000 (0:00:00.616) 0:00:03.617 ******* 2026-02-15 03:19:43.983974 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-15 03:19:43.983978 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-15 03:19:43.983982 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-15 03:19:43.983986 | orchestrator | changed: [testbed-node-0] => (item={'name': 
'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-15 03:19:43.983990 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-15 03:19:43.983994 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-15 03:19:43.984012 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-15 03:19:43.984016 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-15 03:19:43.984020 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-15 03:19:43.984024 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-15 03:19:43.984028 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-15 03:19:43.984032 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-15 03:19:43.984036 | orchestrator | 2026-02-15 03:19:43.984040 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-15 03:19:43.984044 | orchestrator | Sunday 15 February 2026 03:19:39 +0000 (0:00:02.160) 0:00:05.778 ******* 2026-02-15 03:19:43.984048 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-02-15 03:19:43.984053 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-15 03:19:43.984057 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-15 03:19:43.984061 | orchestrator | 2026-02-15 03:19:43.984065 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-15 03:19:43.984069 | orchestrator | Sunday 15 February 2026 03:19:40 +0000 (0:00:00.697) 0:00:06.475 ******* 2026-02-15 03:19:43.984073 | orchestrator | changed: [testbed-node-1] => 
(item=ip_vs) 2026-02-15 03:19:43.984077 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-02-15 03:19:43.984081 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-02-15 03:19:43.984084 | orchestrator | 2026-02-15 03:19:43.984088 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-15 03:19:43.984092 | orchestrator | Sunday 15 February 2026 03:19:41 +0000 (0:00:01.333) 0:00:07.809 ******* 2026-02-15 03:19:43.984096 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-15 03:19:43.984110 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:19:43.984125 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-15 03:19:43.984129 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:19:43.984133 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-15 03:19:43.984137 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:19:43.984141 | orchestrator | 2026-02-15 03:19:43.984146 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-02-15 03:19:43.984150 | orchestrator | Sunday 15 February 2026 03:19:42 +0000 (0:00:00.520) 0:00:08.329 ******* 2026-02-15 03:19:43.984156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-15 03:19:43.984164 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-15 03:19:43.984168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-15 03:19:43.984177 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 
03:19:43.984182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:19:43.984193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:19:49.217987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:19:49.218188 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:19:49.219052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:19:49.219103 | orchestrator | 2026-02-15 03:19:49.219119 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-15 03:19:49.219132 | orchestrator | Sunday 15 February 2026 03:19:43 +0000 (0:00:01.880) 0:00:10.209 ******* 2026-02-15 03:19:49.219144 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:19:49.219157 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:19:49.219168 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:19:49.219179 | orchestrator | 2026-02-15 03:19:49.219190 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-15 03:19:49.219201 | orchestrator | Sunday 15 February 2026 03:19:44 +0000 (0:00:00.898) 0:00:11.107 ******* 2026-02-15 03:19:49.219212 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-02-15 03:19:49.219223 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-02-15 
03:19:49.219234 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-02-15 03:19:49.219245 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-02-15 03:19:49.219255 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-02-15 03:19:49.219266 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-02-15 03:19:49.219277 | orchestrator | 2026-02-15 03:19:49.219288 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-15 03:19:49.219305 | orchestrator | Sunday 15 February 2026 03:19:46 +0000 (0:00:01.470) 0:00:12.578 ******* 2026-02-15 03:19:49.219330 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:19:49.219355 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:19:49.219372 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:19:49.219390 | orchestrator | 2026-02-15 03:19:49.219408 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-15 03:19:49.219426 | orchestrator | Sunday 15 February 2026 03:19:47 +0000 (0:00:00.896) 0:00:13.475 ******* 2026-02-15 03:19:49.219443 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:19:49.219459 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:19:49.219477 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:19:49.219496 | orchestrator | 2026-02-15 03:19:49.219514 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-15 03:19:49.219532 | orchestrator | Sunday 15 February 2026 03:19:48 +0000 (0:00:01.377) 0:00:14.852 ******* 2026-02-15 03:19:49.219554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 03:19:49.219632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:19:49.219655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:19:49.219690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b', '__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 03:19:49.219702 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:19:49.219714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 03:19:49.219766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:19:49.219779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:19:49.219796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b', '__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 03:19:49.219808 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:19:49.219828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 03:19:52.057450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:19:52.057558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:19:52.057629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b', '__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 03:19:52.057643 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:19:52.057674 | orchestrator | 2026-02-15 03:19:52.057697 | orchestrator | TASK [loadbalancer : Copying checks for services 
which are enabled] ************ 2026-02-15 03:19:52.057710 | orchestrator | Sunday 15 February 2026 03:19:49 +0000 (0:00:00.593) 0:00:15.446 ******* 2026-02-15 03:19:52.057722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-15 03:19:52.057735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-15 03:19:52.057773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-15 03:19:52.057805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:19:52.057818 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:19:52.057830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:19:52.057842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:19:52.057854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b', '__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 03:19:52.057882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b', 
'__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 03:19:52.057922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:20:00.774740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:00.774861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.6.20251130', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b', 
'__omit_place_holder__53b73f222e7b2aee0897d3b4ea63e13c1c0bd25b'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 03:20:00.774880 | orchestrator | 2026-02-15 03:20:00.774895 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-15 03:20:00.774908 | orchestrator | Sunday 15 February 2026 03:19:52 +0000 (0:00:02.841) 0:00:18.287 ******* 2026-02-15 03:20:00.774921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-15 03:20:00.774934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-15 03:20:00.774983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-15 03:20:00.774997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:20:00.775028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:20:00.775041 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:20:00.775053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:20:00.775065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:20:00.775076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:20:00.775096 | orchestrator | 2026-02-15 03:20:00.775108 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-15 03:20:00.775119 | orchestrator | Sunday 15 February 2026 03:19:55 +0000 (0:00:03.121) 0:00:21.409 ******* 2026-02-15 03:20:00.775131 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-15 03:20:00.775143 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-15 03:20:00.775154 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-15 03:20:00.775165 | orchestrator | 2026-02-15 03:20:00.775176 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-15 03:20:00.775188 | orchestrator | Sunday 15 February 2026 03:19:57 +0000 (0:00:01.914) 0:00:23.324 ******* 2026-02-15 03:20:00.775199 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-15 03:20:00.775210 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-15 03:20:00.775221 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-15 03:20:00.775232 | orchestrator | 2026-02-15 03:20:00.775246 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-15 03:20:00.775259 | orchestrator | Sunday 15 February 2026 03:20:00 +0000 
(0:00:03.062) 0:00:26.386 ******* 2026-02-15 03:20:00.775273 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:00.775288 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:00.775301 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:00.775313 | orchestrator | 2026-02-15 03:20:00.775334 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-15 03:20:12.599946 | orchestrator | Sunday 15 February 2026 03:20:00 +0000 (0:00:00.621) 0:00:27.008 ******* 2026-02-15 03:20:12.600067 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-15 03:20:12.600096 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-15 03:20:12.600108 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-15 03:20:12.600120 | orchestrator | 2026-02-15 03:20:12.600132 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-15 03:20:12.600144 | orchestrator | Sunday 15 February 2026 03:20:02 +0000 (0:00:02.171) 0:00:29.180 ******* 2026-02-15 03:20:12.600156 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-15 03:20:12.600167 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-15 03:20:12.600178 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-15 03:20:12.600189 | orchestrator | 2026-02-15 03:20:12.600201 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-15 03:20:12.600212 | orchestrator | Sunday 15 February 2026 
03:20:05 +0000 (0:00:02.224) 0:00:31.405 ******* 2026-02-15 03:20:12.600223 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2026-02-15 03:20:12.600234 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-02-15 03:20:12.600245 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-02-15 03:20:12.600256 | orchestrator | 2026-02-15 03:20:12.600267 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-15 03:20:12.600309 | orchestrator | Sunday 15 February 2026 03:20:06 +0000 (0:00:01.477) 0:00:32.883 ******* 2026-02-15 03:20:12.600322 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-15 03:20:12.600333 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-15 03:20:12.600344 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-15 03:20:12.600355 | orchestrator | 2026-02-15 03:20:12.600366 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-15 03:20:12.600377 | orchestrator | Sunday 15 February 2026 03:20:08 +0000 (0:00:01.427) 0:00:34.310 ******* 2026-02-15 03:20:12.600389 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:20:12.600400 | orchestrator | 2026-02-15 03:20:12.600410 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-02-15 03:20:12.600453 | orchestrator | Sunday 15 February 2026 03:20:08 +0000 (0:00:00.546) 0:00:34.856 ******* 2026-02-15 03:20:12.600468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-15 03:20:12.600488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-15 03:20:12.600501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-15 03:20:12.600531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:20:12.600544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:20:12.600564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:20:12.600576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:20:12.600593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:20:12.600618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:20:12.600638 | orchestrator | 2026-02-15 03:20:12.600656 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-02-15 03:20:12.600673 | orchestrator | Sunday 15 February 2026 03:20:11 +0000 (0:00:03.364) 0:00:38.221 ******* 2026-02-15 03:20:12.600704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 03:20:13.531849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:13.531940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:13.531947 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:13.531953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 03:20:13.531958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:13.531971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:13.531976 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:13.531980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 03:20:13.531997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:13.532005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:13.532009 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:13.532013 | orchestrator | 2026-02-15 03:20:13.532018 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-02-15 
03:20:13.532023 | orchestrator | Sunday 15 February 2026 03:20:12 +0000 (0:00:00.614) 0:00:38.835 ******* 2026-02-15 03:20:13.532028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 03:20:13.532032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:13.532039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:13.532044 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:13.532048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 03:20:13.532055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:14.429045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:14.429171 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:14.429198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 03:20:14.429220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:14.429237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:14.429253 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:14.429271 | orchestrator | 2026-02-15 03:20:14.429290 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-15 03:20:14.429309 | orchestrator | Sunday 15 February 2026 03:20:13 +0000 (0:00:00.932) 0:00:39.767 ******* 2026-02-15 03:20:14.429350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 03:20:14.429371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:14.429606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:14.429635 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:14.429656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 03:20:14.429676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:14.429696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:14.429716 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:14.429737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 03:20:14.429774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:14.429808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:14.429840 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:15.893087 | orchestrator | 2026-02-15 03:20:15.893183 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-15 03:20:15.893198 | orchestrator | Sunday 15 February 2026 03:20:14 +0000 (0:00:00.889) 0:00:40.656 ******* 2026-02-15 03:20:15.893213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 03:20:15.893227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:15.893239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:15.893249 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:15.893261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 03:20:15.893289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:15.893323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:15.893333 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:15.893360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 03:20:15.893372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:15.893382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:15.893439 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:15.893451 | orchestrator | 2026-02-15 03:20:15.893462 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-15 03:20:15.893472 | orchestrator | Sunday 15 February 2026 03:20:15 +0000 (0:00:00.624) 0:00:41.281 ******* 2026-02-15 03:20:15.893482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 03:20:15.893497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:15.893521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:15.893531 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:15.893550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 03:20:16.977347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:16.977489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:16.977503 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:16.977513 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 03:20:16.977538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:16.977567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:16.977577 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:16.977585 | orchestrator | 2026-02-15 03:20:16.977595 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-02-15 03:20:16.977604 | orchestrator | Sunday 15 February 2026 03:20:15 +0000 (0:00:00.845) 0:00:42.126 ******* 2026-02-15 03:20:16.977613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
 2026-02-15 03:20:16.977637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:16.977645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:16.977653 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:16.977661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-02-15 03:20:16.977668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:16.977686 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:16.977694 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:16.977701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-02-15 03:20:16.977715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:18.424455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:18.424549 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:18.424564 | orchestrator | 2026-02-15 03:20:18.424576 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2026-02-15 03:20:18.424589 | orchestrator | Sunday 15 February 2026 03:20:16 +0000 (0:00:01.076) 0:00:43.203 ******* 2026-02-15 03:20:18.424603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 03:20:18.424612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:18.424647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:18.424655 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:18.424662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 03:20:18.424669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:18.424689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:18.424696 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:18.424703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 03:20:18.424710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:18.424721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:18.424728 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:18.424734 | orchestrator | 2026-02-15 03:20:18.424741 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2026-02-15 03:20:18.424747 | orchestrator | Sunday 15 February 2026 03:20:17 +0000 (0:00:00.588) 0:00:43.791 ******* 2026-02-15 03:20:18.424754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 03:20:18.424761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:18.424777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:25.011940 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:25.012068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 03:20:25.012097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:25.012146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:25.012160 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:25.012185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 03:20:25.012196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 03:20:25.012207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 03:20:25.012218 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:25.012228 | orchestrator | 2026-02-15 03:20:25.012239 | orchestrator | 
TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-15 03:20:25.012251 | orchestrator | Sunday 15 February 2026 03:20:18 +0000 (0:00:00.863) 0:00:44.655 ******* 2026-02-15 03:20:25.012261 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-15 03:20:25.012289 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-15 03:20:25.012301 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-15 03:20:25.012311 | orchestrator | 2026-02-15 03:20:25.012321 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-15 03:20:25.012371 | orchestrator | Sunday 15 February 2026 03:20:20 +0000 (0:00:01.765) 0:00:46.421 ******* 2026-02-15 03:20:25.012392 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-15 03:20:25.012403 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-15 03:20:25.012412 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-15 03:20:25.012422 | orchestrator | 2026-02-15 03:20:25.012432 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-15 03:20:25.012442 | orchestrator | Sunday 15 February 2026 03:20:21 +0000 (0:00:01.708) 0:00:48.129 ******* 2026-02-15 03:20:25.012451 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-15 03:20:25.012461 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-15 03:20:25.012470 | orchestrator | skipping: [testbed-node-2] => 
(item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-15 03:20:25.012482 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-15 03:20:25.012494 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:25.012505 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-15 03:20:25.012516 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:25.012528 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-15 03:20:25.012539 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:25.012550 | orchestrator | 2026-02-15 03:20:25.012562 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2026-02-15 03:20:25.012573 | orchestrator | Sunday 15 February 2026 03:20:22 +0000 (0:00:00.821) 0:00:48.950 ******* 2026-02-15 03:20:25.012591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-15 03:20:25.012605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-15 03:20:25.012617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.8.15.20251130', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-15 03:20:25.012638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:20:29.557735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:20:29.557864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:3.0.3.20251130', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 03:20:29.557882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:20:29.557914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:20:29.557966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.8.20251130', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 03:20:29.557980 | orchestrator | 2026-02-15 03:20:29.557996 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-02-15 03:20:29.558009 | orchestrator | Sunday 15 February 2026 03:20:24 +0000 (0:00:02.294) 0:00:51.245 ******* 2026-02-15 03:20:29.558066 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:20:29.558078 | orchestrator | 2026-02-15 03:20:29.558090 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-02-15 03:20:29.558122 | orchestrator | Sunday 15 February 2026 03:20:25 +0000 (0:00:00.940) 0:00:52.186 ******* 2026-02-15 03:20:29.558156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 03:20:29.558170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 03:20:29.558183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 03:20:29.558195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 03:20:29.558213 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 03:20:29.558225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 03:20:29.558249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 03:20:29.558272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 03:20:30.216877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 03:20:30.216998 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 03:20:30.217040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 03:20:30.217056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 03:20:30.217096 | orchestrator | 2026-02-15 03:20:30.217113 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 
2026-02-15 03:20:30.217127 | orchestrator | Sunday 15 February 2026 03:20:29 +0000 (0:00:03.595) 0:00:55.781 ******* 2026-02-15 03:20:30.217140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 03:20:30.217173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 03:20:30.217189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 03:20:30.217201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 03:20:30.217213 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:30.217232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 03:20:30.217256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 03:20:30.217270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 03:20:30.217282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 03:20:30.217294 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:30.217347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 03:20:38.976442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 03:20:38.976541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  
2026-02-15 03:20:38.976574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 03:20:38.976583 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:38.976592 | orchestrator | 2026-02-15 03:20:38.976601 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-02-15 03:20:38.976610 | orchestrator | Sunday 15 February 2026 03:20:30 +0000 (0:00:00.667) 0:00:56.449 ******* 2026-02-15 03:20:38.976617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-15 03:20:38.976627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-15 03:20:38.976636 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:38.976657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-15 03:20:38.976663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-15 03:20:38.976670 | 
orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:38.976677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-02-15 03:20:38.976683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-02-15 03:20:38.976690 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:38.976696 | orchestrator | 2026-02-15 03:20:38.976703 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-02-15 03:20:38.976709 | orchestrator | Sunday 15 February 2026 03:20:31 +0000 (0:00:01.157) 0:00:57.606 ******* 2026-02-15 03:20:38.976716 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:20:38.976723 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:20:38.976729 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:20:38.976736 | orchestrator | 2026-02-15 03:20:38.976743 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-02-15 03:20:38.976749 | orchestrator | Sunday 15 February 2026 03:20:32 +0000 (0:00:01.311) 0:00:58.917 ******* 2026-02-15 03:20:38.976756 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:20:38.976762 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:20:38.976769 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:20:38.976776 | orchestrator | 2026-02-15 03:20:38.976782 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-02-15 03:20:38.976789 | orchestrator | Sunday 15 February 2026 03:20:34 +0000 (0:00:02.124) 0:01:01.042 ******* 2026-02-15 03:20:38.976796 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:20:38.976803 | 
orchestrator | 2026-02-15 03:20:38.976823 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-02-15 03:20:38.976830 | orchestrator | Sunday 15 February 2026 03:20:35 +0000 (0:00:00.657) 0:01:01.699 ******* 2026-02-15 03:20:38.976848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-15 03:20:38.976857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 03:20:38.976866 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:20:38.976874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-15 03:20:38.976881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 03:20:38.976895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:20:39.612676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-15 03:20:39.612781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 03:20:39.612803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:20:39.612820 | orchestrator | 2026-02-15 03:20:39.612837 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-15 03:20:39.612855 | orchestrator | Sunday 15 February 2026 03:20:38 +0000 (0:00:03.509) 0:01:05.209 ******* 2026-02-15 03:20:39.612872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-15 03:20:39.612889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 03:20:39.612964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:20:39.612986 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:39.613005 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-15 03:20:39.613022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 03:20:39.613041 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:20:39.613051 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:39.613061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-15 03:20:39.613094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 03:20:49.669055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:20:49.669205 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:49.669225 | orchestrator | 2026-02-15 03:20:49.669238 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-15 03:20:49.669252 | orchestrator | Sunday 15 February 2026 03:20:39 +0000 (0:00:00.634) 0:01:05.844 ******* 2026-02-15 03:20:49.669264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-15 03:20:49.669278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-15 03:20:49.669291 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:49.669302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-15 03:20:49.669314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-15 03:20:49.669325 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:49.669336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-15 03:20:49.669348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-02-15 03:20:49.669359 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:49.669370 | orchestrator | 2026-02-15 03:20:49.669382 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-15 03:20:49.669422 | orchestrator | Sunday 15 February 2026 03:20:40 +0000 (0:00:00.943) 0:01:06.787 ******* 2026-02-15 03:20:49.669434 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:20:49.669445 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:20:49.669456 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:20:49.669466 | orchestrator | 2026-02-15 03:20:49.669478 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-15 03:20:49.669489 | orchestrator | Sunday 15 February 2026 03:20:42 +0000 (0:00:01.573) 0:01:08.361 ******* 2026-02-15 03:20:49.669500 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:20:49.669510 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:20:49.669521 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:20:49.669532 | orchestrator | 2026-02-15 03:20:49.669543 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-02-15 03:20:49.669554 | orchestrator | 
Sunday 15 February 2026 03:20:44 +0000 (0:00:02.147) 0:01:10.508 ******* 2026-02-15 03:20:49.669565 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:49.669576 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:49.669587 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:49.669599 | orchestrator | 2026-02-15 03:20:49.669612 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-15 03:20:49.669624 | orchestrator | Sunday 15 February 2026 03:20:44 +0000 (0:00:00.325) 0:01:10.833 ******* 2026-02-15 03:20:49.669638 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:20:49.669650 | orchestrator | 2026-02-15 03:20:49.669662 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-15 03:20:49.669675 | orchestrator | Sunday 15 February 2026 03:20:45 +0000 (0:00:00.745) 0:01:11.579 ******* 2026-02-15 03:20:49.669724 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-15 03:20:49.669740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': 
{'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-15 03:20:49.669754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-15 03:20:49.669775 | orchestrator | 2026-02-15 03:20:49.669789 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-15 03:20:49.669803 | orchestrator | Sunday 15 February 2026 03:20:48 +0000 (0:00:02.839) 0:01:14.418 ******* 2026-02-15 03:20:49.669816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 
'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-15 03:20:49.669829 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:49.669842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-15 03:20:49.669854 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:49.669879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-15 03:20:57.643713 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:57.643836 | orchestrator | 2026-02-15 03:20:57.643853 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-15 03:20:57.643866 | orchestrator | Sunday 15 February 2026 03:20:49 +0000 (0:00:01.483) 0:01:15.902 ******* 2026-02-15 03:20:57.643884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 03:20:57.643906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 03:20:57.643948 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:57.643968 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 03:20:57.643987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 03:20:57.644003 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:57.644020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 03:20:57.644032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 03:20:57.644042 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:57.644052 | orchestrator | 2026-02-15 03:20:57.644062 
| orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-15 03:20:57.644072 | orchestrator | Sunday 15 February 2026 03:20:51 +0000 (0:00:01.753) 0:01:17.655 ******* 2026-02-15 03:20:57.644081 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:57.644096 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:57.644105 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:57.644115 | orchestrator | 2026-02-15 03:20:57.644154 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-15 03:20:57.644164 | orchestrator | Sunday 15 February 2026 03:20:51 +0000 (0:00:00.426) 0:01:18.082 ******* 2026-02-15 03:20:57.644174 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:57.644183 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:20:57.644193 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:20:57.644203 | orchestrator | 2026-02-15 03:20:57.644227 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-15 03:20:57.644237 | orchestrator | Sunday 15 February 2026 03:20:53 +0000 (0:00:01.354) 0:01:19.437 ******* 2026-02-15 03:20:57.644247 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:20:57.644256 | orchestrator | 2026-02-15 03:20:57.644266 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-15 03:20:57.644276 | orchestrator | Sunday 15 February 2026 03:20:54 +0000 (0:00:01.038) 0:01:20.475 ******* 2026-02-15 03:20:57.644308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 03:20:57.644332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:20:57.644345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 
03:20:57.644356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 03:20:57.644367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 03:20:57.644385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:20:58.392257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 03:20:58.392393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 03:20:58.392424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 03:20:58.392448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:20:58.392478 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 03:20:58.392545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 03:20:58.392560 | orchestrator | 2026-02-15 03:20:58.392575 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-15 03:20:58.392587 | orchestrator | Sunday 15 February 2026 03:20:57 +0000 (0:00:03.490) 0:01:23.966 ******* 2026-02-15 03:20:58.392600 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': 
'8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-15 03:20:58.392612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:20:58.392624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 03:20:58.392641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 03:20:58.392660 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:20:58.392685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-15 03:21:05.094736 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-15 03:21:05.094835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:21:05.094850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:21:05.094861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 03:21:05.094909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 03:21:05.094935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 03:21:05.094946 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 03:21:05.094956 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:21:05.094968 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:21:05.094977 | orchestrator | 2026-02-15 03:21:05.094988 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-15 03:21:05.094998 | orchestrator | Sunday 15 February 2026 03:20:58 +0000 (0:00:00.780) 0:01:24.746 ******* 2026-02-15 03:21:05.095008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-15 03:21:05.095020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-15 03:21:05.095030 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:21:05.095039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-15 03:21:05.095048 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-15 03:21:05.095057 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:21:05.095066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-15 03:21:05.095145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-02-15 03:21:05.095164 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:21:05.095173 | orchestrator | 2026-02-15 03:21:05.095182 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-15 03:21:05.095191 | orchestrator | Sunday 15 February 2026 03:20:59 +0000 (0:00:01.426) 0:01:26.173 ******* 2026-02-15 03:21:05.095200 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:21:05.095209 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:21:05.095218 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:21:05.095226 | orchestrator | 2026-02-15 03:21:05.095236 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-15 03:21:05.095245 | orchestrator | Sunday 15 February 2026 03:21:01 +0000 (0:00:01.344) 0:01:27.518 ******* 2026-02-15 03:21:05.095254 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:21:05.095264 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:21:05.095280 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:21:05.095291 | orchestrator | 2026-02-15 03:21:05.095302 | orchestrator | TASK [include_role : cloudkitty] 
*********************************************** 2026-02-15 03:21:05.095313 | orchestrator | Sunday 15 February 2026 03:21:03 +0000 (0:00:02.068) 0:01:29.586 ******* 2026-02-15 03:21:05.095323 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:21:05.095334 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:21:05.095344 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:21:05.095355 | orchestrator | 2026-02-15 03:21:05.095365 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-15 03:21:05.095376 | orchestrator | Sunday 15 February 2026 03:21:03 +0000 (0:00:00.308) 0:01:29.895 ******* 2026-02-15 03:21:05.095387 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:21:05.095396 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:21:05.095407 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:21:05.095417 | orchestrator | 2026-02-15 03:21:05.095428 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-15 03:21:05.095439 | orchestrator | Sunday 15 February 2026 03:21:03 +0000 (0:00:00.345) 0:01:30.240 ******* 2026-02-15 03:21:05.095449 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:21:05.095459 | orchestrator | 2026-02-15 03:21:05.095470 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-15 03:21:05.095480 | orchestrator | Sunday 15 February 2026 03:21:05 +0000 (0:00:01.086) 0:01:31.327 ******* 2026-02-15 03:21:09.625259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 03:21:09.625401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 03:21:09.625468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 03:21:09.625487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 03:21:09.625516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 03:21:09.625529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:21:09.625563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-15 03:21:09.625576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 03:21:09.625597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 03:21:09.625609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 03:21:09.625627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 03:21:09.625647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 03:21:10.495929 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 03:21:10.496079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 03:21:10.496140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:21:10.496162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 03:21:10.496191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-15 03:21:10.496204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 03:21:10.496235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 03:21:10.496248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:21:10.496260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-15 03:21:10.496281 | orchestrator | 2026-02-15 03:21:10.496295 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-02-15 03:21:10.496308 | orchestrator | Sunday 15 February 2026 03:21:09 +0000 (0:00:04.759) 0:01:36.086 ******* 2026-02-15 03:21:10.496324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-15 03:21:10.496356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}})  2026-02-15 03:21:10.496384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 03:21:10.496415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 03:21:11.132254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 03:21:11.132352 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:21:11.132361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-15 03:21:11.132368 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:21:11.132377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-15 03:21:11.132385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 03:21:11.132742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 03:21:11.132768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 03:21:11.132783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 03:21:11.132794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:21:11.132801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-15 03:21:11.132808 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:21:11.132816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-15 03:21:11.132824 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 03:21:11.132836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 
'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 03:21:22.182245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 03:21:22.182345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 03:21:22.182355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:21:22.182363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-15 03:21:22.182371 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:21:22.182379 | orchestrator | 2026-02-15 03:21:22.182386 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-02-15 03:21:22.182394 | orchestrator | Sunday 15 February 2026 03:21:11 +0000 (0:00:01.283) 0:01:37.369 ******* 2026-02-15 03:21:22.182400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-15 03:21:22.182409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-15 03:21:22.182416 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:21:22.182422 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-15 03:21:22.182447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-15 03:21:22.182453 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:21:22.182459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-02-15 03:21:22.182465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-02-15 03:21:22.182471 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:21:22.182477 | orchestrator | 2026-02-15 03:21:22.182483 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-02-15 03:21:22.182500 | orchestrator | Sunday 15 February 2026 03:21:12 +0000 (0:00:01.422) 0:01:38.791 ******* 2026-02-15 03:21:22.182506 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:21:22.182512 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:21:22.182518 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:21:22.182524 | orchestrator | 2026-02-15 03:21:22.182530 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-02-15 03:21:22.182536 | orchestrator | Sunday 15 February 2026 03:21:13 +0000 (0:00:01.319) 0:01:40.111 ******* 2026-02-15 03:21:22.182541 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:21:22.182547 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:21:22.182553 | 
orchestrator | changed: [testbed-node-2] 2026-02-15 03:21:22.182559 | orchestrator | 2026-02-15 03:21:22.182565 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-02-15 03:21:22.182570 | orchestrator | Sunday 15 February 2026 03:21:15 +0000 (0:00:02.131) 0:01:42.243 ******* 2026-02-15 03:21:22.182576 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:21:22.182582 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:21:22.182588 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:21:22.182594 | orchestrator | 2026-02-15 03:21:22.182600 | orchestrator | TASK [include_role : glance] *************************************************** 2026-02-15 03:21:22.182605 | orchestrator | Sunday 15 February 2026 03:21:16 +0000 (0:00:00.313) 0:01:42.556 ******* 2026-02-15 03:21:22.182611 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:21:22.182617 | orchestrator | 2026-02-15 03:21:22.182627 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-02-15 03:21:22.182633 | orchestrator | Sunday 15 February 2026 03:21:17 +0000 (0:00:01.081) 0:01:43.638 ******* 2026-02-15 03:21:22.182642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 03:21:22.182660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-15 03:21:25.781570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 
rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 03:21:25.781682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-15 03:21:25.781730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 03:21:25.781743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-15 03:21:25.781758 | orchestrator | 2026-02-15 03:21:25.781767 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-02-15 03:21:25.781776 | orchestrator | Sunday 15 February 2026 03:21:22 +0000 (0:00:04.894) 0:01:48.533 ******* 2026-02-15 03:21:25.781796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-15 03:21:25.891669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-15 03:21:25.891780 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:21:25.891806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-15 03:21:25.891831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-15 03:21:25.891845 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:21:25.891853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-15 03:21:25.891869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20251130', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-15 03:21:38.534657 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:21:38.534789 | orchestrator |
2026-02-15 03:21:38.534815 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************
2026-02-15 03:21:38.534832 | orchestrator | Sunday 15 February 2026 03:21:25 +0000 (0:00:03.592) 0:01:52.125 *******
2026-02-15 03:21:38.534852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-02-15 03:21:38.534948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-02-15 03:21:38.534972 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:21:38.534991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-02-15 03:21:38.535010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-02-15 03:21:38.535027 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:21:38.535059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-02-15 03:21:38.535070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})
2026-02-15 03:21:38.535102 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:21:38.535112 | orchestrator |
2026-02-15 03:21:38.535123 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] *************
2026-02-15 03:21:38.535134 | orchestrator | Sunday 15 February 2026 03:21:29 +0000 (0:00:04.062) 0:01:56.188 *******
2026-02-15 03:21:38.535144 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:21:38.535153 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:21:38.535163 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:21:38.535174 | orchestrator |
2026-02-15 03:21:38.535186 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] *************
2026-02-15 03:21:38.535198 | orchestrator | Sunday 15 February 2026 03:21:31 +0000 (0:00:01.358) 0:01:57.546 *******
2026-02-15 03:21:38.535209 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:21:38.535221 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:21:38.535233 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:21:38.535244 | orchestrator |
2026-02-15 03:21:38.535255 | orchestrator | TASK [include_role : gnocchi] **************************************************
2026-02-15 03:21:38.535287 | orchestrator | Sunday 15 February 2026 03:21:33 +0000 (0:00:02.202) 0:01:59.749 *******
2026-02-15 03:21:38.535299 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:21:38.535309 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:21:38.535320 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:21:38.535331 | orchestrator |
2026-02-15 03:21:38.535343 | orchestrator | TASK [include_role : grafana] **************************************************
2026-02-15 03:21:38.535354 | orchestrator | Sunday 15 February 2026 03:21:33 +0000 (0:00:00.314) 0:02:00.063 *******
2026-02-15 03:21:38.535365 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:21:38.535376 | orchestrator |
2026-02-15 03:21:38.535387 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2026-02-15 03:21:38.535398 | orchestrator | Sunday 15 February 2026 03:21:34 +0000 (0:00:01.095) 0:02:01.159 *******
2026-02-15 03:21:38.535410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-15 03:21:38.535423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-15 03:21:38.535436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-15 03:21:38.535484 | orchestrator |
2026-02-15 03:21:38.535497 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] ***
2026-02-15 03:21:38.535510 | orchestrator | Sunday 15 February 2026 03:21:38 +0000 (0:00:03.390) 0:02:04.550 *******
2026-02-15 03:21:38.535521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-15 03:21:38.535541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-15 03:21:47.826444 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:21:47.826559 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:21:47.826578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2026-02-15 03:21:47.826594 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:21:47.826607 | orchestrator |
2026-02-15 03:21:47.826700 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2026-02-15 03:21:47.826721 | orchestrator | Sunday 15 February 2026 03:21:38 +0000 (0:00:00.425) 0:02:04.975 *******
2026-02-15 03:21:47.826733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-02-15 03:21:47.826746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-02-15 03:21:47.826758 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:21:47.826769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-02-15 03:21:47.826780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-02-15 03:21:47.826810 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:21:47.826878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2026-02-15 03:21:47.826891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2026-02-15 03:21:47.826902 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:21:47.826913 | orchestrator |
2026-02-15 03:21:47.826924 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2026-02-15 03:21:47.826935 | orchestrator | Sunday 15 February 2026 03:21:39 +0000 (0:00:00.913) 0:02:05.889 *******
2026-02-15 03:21:47.826946 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:21:47.826959 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:21:47.826976 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:21:47.826989 | orchestrator |
2026-02-15 03:21:47.827002 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2026-02-15 03:21:47.827015 | orchestrator | Sunday 15 February 2026 03:21:40 +0000 (0:00:01.320) 0:02:07.209 *******
2026-02-15 03:21:47.827027 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:21:47.827040 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:21:47.827052 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:21:47.827065 | orchestrator |
2026-02-15 03:21:47.827078 | orchestrator | TASK [include_role : heat] *****************************************************
2026-02-15 03:21:47.827090 | orchestrator | Sunday 15 February 2026 03:21:43 +0000 (0:00:02.103) 0:02:09.313 *******
2026-02-15 03:21:47.827116 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:21:47.827140 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:21:47.827154 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:21:47.827166 | orchestrator |
2026-02-15 03:21:47.827178 | orchestrator | TASK [include_role : horizon] **************************************************
2026-02-15 03:21:47.827192 | orchestrator | Sunday 15 February 2026 03:21:43 +0000 (0:00:00.334) 0:02:09.647 *******
2026-02-15 03:21:47.827204 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:21:47.827216 | orchestrator |
2026-02-15 03:21:47.827229 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2026-02-15 03:21:47.827241 | orchestrator | Sunday 15 February 2026 03:21:44 +0000 (0:00:01.172) 0:02:10.820 *******
2026-02-15 03:21:47.827280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-15 03:21:47.827313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-15 03:21:47.827338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-15 03:21:49.549088 | orchestrator |
2026-02-15 03:21:49.549218 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2026-02-15 03:21:49.549247 | orchestrator | Sunday 15 February 2026 03:21:47 +0000 (0:00:03.243) 0:02:14.063 *******
2026-02-15 03:21:49.549299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-15 03:21:49.549327 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:21:49.549377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-15 03:21:49.549433 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:21:49.549465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-15 03:21:49.549486 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:21:49.549506 | orchestrator |
2026-02-15 03:21:49.549539 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2026-02-15 03:21:49.549560 | orchestrator | Sunday 15 February 2026 03:21:48 +0000 (0:00:00.687) 0:02:14.751 *******
2026-02-15 03:21:49.549582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-02-15 03:21:49.549604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-15 03:21:49.549626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-02-15 03:21:49.549662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-15 03:21:58.700872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-02-15 03:21:58.700975 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:21:58.700991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-02-15 03:21:58.701019 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-15 03:21:58.701032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-02-15 03:21:58.701043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-15 03:21:58.701053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-02-15 03:21:58.701063 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:21:58.701072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-02-15 03:21:58.701082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-15 03:21:58.701114 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2026-02-15 03:21:58.701124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2026-02-15 03:21:58.701133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2026-02-15 03:21:58.701142 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:21:58.701151 | orchestrator |
2026-02-15 03:21:58.701162 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2026-02-15 03:21:58.701173 | orchestrator | Sunday 15 February 2026 03:21:49 +0000 (0:00:01.032) 0:02:15.784 *******
2026-02-15 03:21:58.701182 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:21:58.701191 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:21:58.701200 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:21:58.701209 | orchestrator |
2026-02-15 03:21:58.701218 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2026-02-15 03:21:58.701227 | orchestrator | Sunday 15 February 2026 03:21:51 +0000 (0:00:01.619) 0:02:17.403 *******
2026-02-15 03:21:58.701236 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:21:58.701245 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:21:58.701254 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:21:58.701262 | orchestrator |
2026-02-15 03:21:58.701271 | orchestrator | TASK [include_role : influxdb] *************************************************
2026-02-15 03:21:58.701280 | orchestrator | Sunday 15 February 2026 03:21:53 +0000 (0:00:02.160) 0:02:19.564 *******
2026-02-15 03:21:58.701289 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:21:58.701298 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:21:58.701322 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:21:58.701332 | orchestrator |
2026-02-15 03:21:58.701341 | orchestrator | TASK [include_role : ironic] ***************************************************
2026-02-15 03:21:58.701350 | orchestrator | Sunday 15 February 2026 03:21:53 +0000 (0:00:00.321) 0:02:19.886 *******
2026-02-15 03:21:58.701359 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:21:58.701367 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:21:58.701376 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:21:58.701385 | orchestrator |
2026-02-15 03:21:58.701394 | orchestrator | TASK [include_role : keystone] *************************************************
2026-02-15 03:21:58.701403 | orchestrator | Sunday 15 February 2026 03:21:53 +0000 (0:00:00.352) 0:02:20.238 *******
2026-02-15 03:21:58.701411 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:21:58.701420 | orchestrator |
2026-02-15 03:21:58.701433 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2026-02-15 03:21:58.701442 | orchestrator | Sunday 15 February 2026 03:21:55 +0000 (0:00:01.276) 0:02:21.515 *******
2026-02-15 03:21:58.701455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-15
03:21:58.701477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 03:21:58.701488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 03:21:58.701498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-15 03:21:58.701520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 03:21:59.427843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 03:21:59.427955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-15 03:21:59.427969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 03:21:59.427979 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 03:21:59.427988 | orchestrator | 2026-02-15 03:21:59.427999 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-15 03:21:59.428008 | orchestrator | Sunday 15 February 2026 03:21:58 +0000 (0:00:03.420) 0:02:24.935 ******* 2026-02-15 03:21:59.428037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-15 03:21:59.428048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 03:21:59.428067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 03:21:59.428076 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:21:59.428086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-15 
03:21:59.428095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 03:21:59.428104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 03:21:59.428112 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:21:59.428131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-15 03:22:09.487034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 03:22:09.487170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 03:22:09.487233 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:09.487251 | orchestrator | 2026-02-15 03:22:09.487264 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] 
********************** 2026-02-15 03:22:09.487278 | orchestrator | Sunday 15 February 2026 03:21:59 +0000 (0:00:00.716) 0:02:25.652 ******* 2026-02-15 03:22:09.487292 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-15 03:22:09.487307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-15 03:22:09.487320 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:09.487332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-15 03:22:09.487344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-15 03:22:09.487355 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:09.487367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-15 03:22:09.487379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-02-15 03:22:09.487416 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:09.487428 | orchestrator | 2026-02-15 03:22:09.487440 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-15 03:22:09.487451 | orchestrator | Sunday 15 February 2026 03:22:00 +0000 (0:00:01.121) 0:02:26.773 ******* 2026-02-15 03:22:09.487462 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:22:09.487473 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:22:09.487484 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:22:09.487495 | orchestrator | 2026-02-15 03:22:09.487521 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-15 03:22:09.487535 | orchestrator | Sunday 15 February 2026 03:22:01 +0000 (0:00:01.356) 0:02:28.130 ******* 2026-02-15 03:22:09.487567 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:22:09.487580 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:22:09.487605 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:22:09.487618 | orchestrator | 2026-02-15 03:22:09.487630 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-15 03:22:09.487642 | orchestrator | Sunday 15 February 2026 03:22:03 +0000 (0:00:02.113) 0:02:30.243 ******* 2026-02-15 03:22:09.487653 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:09.487664 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:09.487675 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:09.487686 | orchestrator | 2026-02-15 03:22:09.487697 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-15 03:22:09.487764 | orchestrator | Sunday 15 February 2026 03:22:04 +0000 (0:00:00.341) 0:02:30.585 ******* 2026-02-15 03:22:09.487784 | orchestrator | included: magnum 
for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:22:09.487803 | orchestrator | 2026-02-15 03:22:09.487822 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-15 03:22:09.487840 | orchestrator | Sunday 15 February 2026 03:22:05 +0000 (0:00:01.330) 0:02:31.915 ******* 2026-02-15 03:22:09.487861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 03:22:09.487884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 03:22:09.487905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 03:22:09.487942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 03:22:09.487976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 03:22:14.904225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 03:22:14.904341 | orchestrator | 2026-02-15 03:22:14.904359 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-15 03:22:14.904374 | orchestrator | Sunday 15 February 2026 03:22:09 +0000 (0:00:03.801) 0:02:35.717 ******* 2026-02-15 03:22:14.904387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 03:22:14.904508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 03:22:14.904531 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:14.904550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 03:22:14.904585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 03:22:14.904598 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:14.904609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 03:22:14.904621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 03:22:14.904642 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:14.904654 | orchestrator | 2026-02-15 03:22:14.904665 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-15 03:22:14.904749 | orchestrator | Sunday 15 February 2026 03:22:10 +0000 (0:00:00.709) 0:02:36.427 ******* 2026-02-15 03:22:14.904772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-15 03:22:14.904794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-15 03:22:14.904815 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:14.904836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-15 03:22:14.904856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-15 03:22:14.904877 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:14.904898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-02-15 03:22:14.904912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-02-15 03:22:14.904925 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:14.904938 | orchestrator | 2026-02-15 03:22:14.904951 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-15 03:22:14.904964 | orchestrator | Sunday 15 February 2026 03:22:11 +0000 (0:00:00.930) 0:02:37.357 ******* 2026-02-15 03:22:14.904977 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:22:14.904990 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:22:14.905003 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:22:14.905015 | orchestrator | 2026-02-15 03:22:14.905028 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-15 03:22:14.905040 | orchestrator | Sunday 15 February 2026 03:22:12 +0000 (0:00:01.664) 0:02:39.021 ******* 
2026-02-15 03:22:14.905052 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:22:14.905065 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:22:14.905078 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:22:14.905091 | orchestrator | 2026-02-15 03:22:14.905103 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-15 03:22:14.905125 | orchestrator | Sunday 15 February 2026 03:22:14 +0000 (0:00:02.112) 0:02:41.134 ******* 2026-02-15 03:22:19.508593 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:22:19.508753 | orchestrator | 2026-02-15 03:22:19.508771 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-15 03:22:19.508783 | orchestrator | Sunday 15 February 2026 03:22:15 +0000 (0:00:01.065) 0:02:42.199 ******* 2026-02-15 03:22:19.508798 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-15 03:22:19.508839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:22:19.508853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 03:22:19.508867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 03:22:19.508893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-15 03:22:19.508927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:22:19.508947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 03:22:19.508959 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 03:22:19.508971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-15 03:22:19.508988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:22:19.509000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 03:22:19.509019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 03:22:20.564934 | orchestrator | 2026-02-15 03:22:20.565069 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-15 03:22:20.565094 | orchestrator | Sunday 15 February 2026 03:22:19 +0000 (0:00:03.637) 0:02:45.837 ******* 2026-02-15 03:22:20.565117 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-15 03:22:20.565141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:22:20.565162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 03:22:20.565205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 03:22:20.565225 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:20.565247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-15 03:22:20.565324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 
'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:22:20.565347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 03:22:20.565360 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 03:22:20.565372 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:20.565383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 
'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-15 03:22:20.565402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 03:22:20.565414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 03:22:20.565442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 03:22:32.215493 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:32.215784 | orchestrator | 2026-02-15 03:22:32.215820 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2026-02-15 03:22:32.215842 | orchestrator | Sunday 15 February 2026 03:22:20 +0000 (0:00:01.041) 0:02:46.878 ******* 2026-02-15 03:22:32.215862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-15 03:22:32.215885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-15 03:22:32.215906 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:32.215925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-15 03:22:32.215945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-15 03:22:32.215964 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:32.215982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-02-15 03:22:32.216001 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-02-15 03:22:32.216021 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:32.216040 | orchestrator | 2026-02-15 03:22:32.216060 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-02-15 03:22:32.216080 | orchestrator | Sunday 15 February 2026 03:22:21 +0000 (0:00:00.882) 0:02:47.761 ******* 2026-02-15 03:22:32.216099 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:22:32.216117 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:22:32.216136 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:22:32.216155 | orchestrator | 2026-02-15 03:22:32.216175 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-02-15 03:22:32.216195 | orchestrator | Sunday 15 February 2026 03:22:22 +0000 (0:00:01.324) 0:02:49.085 ******* 2026-02-15 03:22:32.216213 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:22:32.216232 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:22:32.216250 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:22:32.216271 | orchestrator | 2026-02-15 03:22:32.216291 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-02-15 03:22:32.216311 | orchestrator | Sunday 15 February 2026 03:22:25 +0000 (0:00:02.190) 
0:02:51.275 ******* 2026-02-15 03:22:32.216360 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:22:32.216380 | orchestrator | 2026-02-15 03:22:32.216418 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-02-15 03:22:32.216437 | orchestrator | Sunday 15 February 2026 03:22:26 +0000 (0:00:01.426) 0:02:52.702 ******* 2026-02-15 03:22:32.216456 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 03:22:32.216474 | orchestrator | 2026-02-15 03:22:32.216493 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-02-15 03:22:32.216512 | orchestrator | Sunday 15 February 2026 03:22:29 +0000 (0:00:03.238) 0:02:55.940 ******* 2026-02-15 03:22:32.216566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:22:32.216618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-15 03:22:32.216640 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:32.216668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:22:32.216701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-15 03:22:32.216721 | orchestrator | skipping: 
[testbed-node-1] 2026-02-15 03:22:32.216755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:22:34.701874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 
'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-15 03:22:34.701985 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:34.701998 | orchestrator | 2026-02-15 03:22:34.702007 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-02-15 03:22:34.702058 | orchestrator | Sunday 15 February 2026 03:22:32 +0000 (0:00:02.502) 0:02:58.443 ******* 2026-02-15 03:22:34.702085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:22:34.702095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-15 03:22:34.702103 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:34.702130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:22:34.702149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-15 03:22:34.702158 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:34.702165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:22:34.702178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-02-15 03:22:44.971921 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:44.972060 | orchestrator | 2026-02-15 03:22:44.972087 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-02-15 03:22:44.972107 | orchestrator | Sunday 15 February 2026 03:22:34 +0000 (0:00:02.489) 0:03:00.933 ******* 2026-02-15 03:22:44.972128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-15 03:22:44.972171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-15 03:22:44.972192 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:44.972211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-15 03:22:44.972229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-15 03:22:44.972247 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:44.972264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-15 03:22:44.972283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-02-15 03:22:44.972334 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:44.972356 | orchestrator | 2026-02-15 03:22:44.972375 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-02-15 03:22:44.972394 | orchestrator | Sunday 15 February 2026 03:22:37 +0000 (0:00:02.963) 0:03:03.897 ******* 2026-02-15 03:22:44.972405 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:22:44.972437 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:22:44.972452 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:22:44.972464 | orchestrator | 2026-02-15 03:22:44.972478 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-02-15 03:22:44.972490 | orchestrator | Sunday 15 February 2026 03:22:39 +0000 (0:00:02.158) 0:03:06.056 ******* 2026-02-15 03:22:44.972502 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:44.972514 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:44.972559 | 
orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:44.972571 | orchestrator | 2026-02-15 03:22:44.972620 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-02-15 03:22:44.972633 | orchestrator | Sunday 15 February 2026 03:22:41 +0000 (0:00:01.510) 0:03:07.566 ******* 2026-02-15 03:22:44.972646 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:44.972659 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:44.972671 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:44.972683 | orchestrator | 2026-02-15 03:22:44.972696 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-02-15 03:22:44.972709 | orchestrator | Sunday 15 February 2026 03:22:41 +0000 (0:00:00.332) 0:03:07.899 ******* 2026-02-15 03:22:44.972721 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:22:44.972734 | orchestrator | 2026-02-15 03:22:44.972746 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-02-15 03:22:44.972766 | orchestrator | Sunday 15 February 2026 03:22:43 +0000 (0:00:01.455) 0:03:09.355 ******* 2026-02-15 03:22:44.972781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 
'active_passive': True}}}}) 2026-02-15 03:22:44.972797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-15 03:22:44.972808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-15 03:22:44.972837 | orchestrator | 2026-02-15 03:22:44.972856 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-02-15 03:22:44.972874 | orchestrator | Sunday 15 February 2026 03:22:44 +0000 (0:00:01.605) 0:03:10.960 ******* 2026-02-15 03:22:44.972903 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-15 03:22:53.899523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-15 03:22:53.899638 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:53.899661 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:53.899680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.24.20251130', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-15 03:22:53.899698 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:53.899716 | orchestrator | 2026-02-15 03:22:53.899736 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-02-15 03:22:53.899756 | orchestrator | Sunday 15 February 2026 03:22:45 +0000 (0:00:00.445) 0:03:11.406 ******* 2026-02-15 03:22:53.899776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-15 03:22:53.899791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-15 03:22:53.899827 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:53.899838 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:53.899848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-02-15 03:22:53.899866 | orchestrator | skipping: 
[testbed-node-2] 2026-02-15 03:22:53.899882 | orchestrator | 2026-02-15 03:22:53.900010 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-02-15 03:22:53.900041 | orchestrator | Sunday 15 February 2026 03:22:46 +0000 (0:00:00.948) 0:03:12.354 ******* 2026-02-15 03:22:53.900058 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:53.900075 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:53.900093 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:53.900110 | orchestrator | 2026-02-15 03:22:53.900127 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-02-15 03:22:53.900144 | orchestrator | Sunday 15 February 2026 03:22:46 +0000 (0:00:00.557) 0:03:12.912 ******* 2026-02-15 03:22:53.900163 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:53.900180 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:53.900195 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:53.900206 | orchestrator | 2026-02-15 03:22:53.900217 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-02-15 03:22:53.900229 | orchestrator | Sunday 15 February 2026 03:22:48 +0000 (0:00:01.602) 0:03:14.515 ******* 2026-02-15 03:22:53.900239 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:53.900250 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:53.900262 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:22:53.900273 | orchestrator | 2026-02-15 03:22:53.900284 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-02-15 03:22:53.900296 | orchestrator | Sunday 15 February 2026 03:22:48 +0000 (0:00:00.338) 0:03:14.853 ******* 2026-02-15 03:22:53.900306 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:22:53.900317 | orchestrator | 2026-02-15 03:22:53.900328 | orchestrator 
| TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-02-15 03:22:53.900339 | orchestrator | Sunday 15 February 2026 03:22:50 +0000 (0:00:01.530) 0:03:16.384 ******* 2026-02-15 03:22:53.900377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 03:22:53.900390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:53.900421 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:53.900441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:53.900458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-15 03:22:53.900515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.096282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:54.096394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:54.096437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 03:22:54.096453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.096467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:22:54.096523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.096545 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.096566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-15 03:22:54.096579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.096591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:54.096603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.096615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.096640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 03:22:54.247363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-15 03:22:54.247548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:22:54.247580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.247598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 03:22:54.247635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.247705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.247723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.247737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-15 03:22:54.247754 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:54.247771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.247794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:54.247820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:54.247847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-15 
03:22:54.468660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:54.468751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:22:54.468765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.468776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.468838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:22:54.468850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-15 03:22:54.468878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.468889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:54.468903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-15 03:22:54.468918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:54.468933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:54.468964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': 
'30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 03:22:54.468994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.636271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:22:55.636389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 03:22:55.636415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:22:55.636462 | orchestrator | 2026-02-15 03:22:55.636510 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-15 03:22:55.636528 | orchestrator | Sunday 15 February 2026 03:22:54 +0000 (0:00:04.316) 0:03:20.700 ******* 2026-02-15 03:22:55.636566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 03:22:55.636606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.636624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.636641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.636659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': 
'30'}}})  2026-02-15 03:22:55.636697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.636716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:55.636735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:55.636873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': 
False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.717715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:22:55.717831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 03:22:55.717901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.717922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.717942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 
'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-15 03:22:55.717985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.718004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:55.718098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.718127 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.718146 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-15 
03:22:55.718166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 03:22:55.718235 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.815714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 03:22:55.815796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:22:55.815803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:55.815810 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:22:55.815816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 
'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.2.20251130', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.815821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:55.815836 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.815855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.815860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.815864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:22:55.815868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-02-15 03:22:55.815873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:55.815929 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:56.047668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-15 03:22:56.047750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.2.20251130', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:56.047778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 
'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:56.047787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:56.047795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:56.047803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:56.047842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 03:22:56.047854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:22:56.047866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:22:56.047873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-15 03:22:56.047881 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:22:56.047890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-02-15 03:22:56.047897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.2.20251130', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-02-15 03:22:56.047914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.2.20251130', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-15 03:23:07.196765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 03:23:07.196868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.2.20251130', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 03:23:07.196877 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:23:07.196885 | orchestrator | 2026-02-15 03:23:07.196892 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-15 03:23:07.196898 | orchestrator | Sunday 15 February 2026 03:22:56 +0000 (0:00:01.578) 0:03:22.279 ******* 2026-02-15 03:23:07.196905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-15 03:23:07.196913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-15 03:23:07.196920 | orchestrator | skipping: [testbed-node-0] 
2026-02-15 03:23:07.196925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-15 03:23:07.196930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-15 03:23:07.196950 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:23:07.196956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-02-15 03:23:07.196961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-02-15 03:23:07.196966 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:23:07.196971 | orchestrator | 2026-02-15 03:23:07.196976 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-15 03:23:07.196982 | orchestrator | Sunday 15 February 2026 03:22:58 +0000 (0:00:02.163) 0:03:24.442 ******* 2026-02-15 03:23:07.196988 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:23:07.196993 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:23:07.196998 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:23:07.197003 | orchestrator | 2026-02-15 03:23:07.197008 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-15 03:23:07.197013 | orchestrator | Sunday 15 February 2026 03:22:59 +0000 (0:00:01.406) 0:03:25.848 ******* 2026-02-15 03:23:07.197018 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:23:07.197023 | orchestrator | changed: 
[testbed-node-1] 2026-02-15 03:23:07.197028 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:23:07.197033 | orchestrator | 2026-02-15 03:23:07.197039 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-15 03:23:07.197044 | orchestrator | Sunday 15 February 2026 03:23:01 +0000 (0:00:02.111) 0:03:27.960 ******* 2026-02-15 03:23:07.197049 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:23:07.197054 | orchestrator | 2026-02-15 03:23:07.197059 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-15 03:23:07.197075 | orchestrator | Sunday 15 February 2026 03:23:03 +0000 (0:00:01.313) 0:03:29.274 ******* 2026-02-15 03:23:07.197082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 03:23:07.197092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-15 03:23:07.197097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-15 03:23:07.197108 | orchestrator |
2026-02-15 03:23:07.197114 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] ***
2026-02-15 03:23:07.197120 | orchestrator | Sunday 15 February 2026 03:23:06 +0000 (0:00:03.608) 0:03:32.882 *******
2026-02-15 03:23:07.197125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-15 03:23:07.197131 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:23:07.197140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-15 03:23:17.848442 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:23:17.848628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-02-15 03:23:17.848708 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:23:17.848723 | orchestrator |
2026-02-15 03:23:17.848735 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] *********************
2026-02-15 03:23:17.848749 | orchestrator | Sunday 15 February 2026 03:23:07 +0000 (0:00:00.546) 0:03:33.429 *******
2026-02-15 03:23:17.848761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-15 03:23:17.848774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-15 03:23:17.848787 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:23:17.848799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-15 03:23:17.848810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-15 03:23:17.848821 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:23:17.848832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-15 03:23:17.848843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})
2026-02-15 03:23:17.848854 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:23:17.848865 | orchestrator |
2026-02-15 03:23:17.848877 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] **********
2026-02-15 03:23:17.848895 | orchestrator | Sunday 15 February 2026 03:23:07 +0000 (0:00:00.779) 0:03:34.209 *******
2026-02-15 03:23:17.848915 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:23:17.848933 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:23:17.848951 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:23:17.848970 | orchestrator |
2026-02-15 03:23:17.848989 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] **********
2026-02-15 03:23:17.849010 | orchestrator | Sunday 15 February 2026 03:23:09 +0000 (0:00:01.996) 0:03:36.206 *******
2026-02-15 03:23:17.849029 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:23:17.849049 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:23:17.849064 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:23:17.849076 | orchestrator |
2026-02-15 03:23:17.849089 | orchestrator | TASK [include_role : nova] *****************************************************
2026-02-15 03:23:17.849101 | orchestrator | Sunday 15 February 2026 03:23:11 +0000 (0:00:01.921) 0:03:38.127 *******
2026-02-15 03:23:17.849114 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:23:17.849126 | orchestrator |
2026-02-15 03:23:17.849138 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] ***********************
2026-02-15 03:23:17.849151 | orchestrator | Sunday 15 February 2026 03:23:13 +0000 (0:00:01.572) 0:03:39.700 *******
2026-02-15 03:23:17.849196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-15 03:23:17.849224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 03:23:17.849239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 03:23:17.849254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-15 03:23:17.849267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 03:23:17.849297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 03:23:18.901568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-15 03:23:18.901667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 03:23:18.901682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 03:23:18.901694 | orchestrator |
2026-02-15 03:23:18.901706 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] ***
2026-02-15 03:23:18.901717 | orchestrator | Sunday 15 February 2026 03:23:17 +0000 (0:00:04.378) 0:03:44.078 *******
2026-02-15 03:23:18.901729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-15 03:23:18.901782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 03:23:18.901795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 03:23:18.901806 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:23:18.901818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-15 03:23:18.901828 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 03:23:18.901839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 03:23:18.901857 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:23:18.901880 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-02-15 03:23:32.315980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 03:23:32.316067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 03:23:32.316078 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:23:32.316089 | orchestrator |
2026-02-15 03:23:32.316098 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] **************************
2026-02-15 03:23:32.316107 | orchestrator | Sunday 15 February 2026 03:23:18 +0000 (0:00:01.050) 0:03:45.129 *******
2026-02-15 03:23:32.316115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316135 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316171 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:23:32.316179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316229 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:23:32.316319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})
2026-02-15 03:23:32.316363 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:23:32.316371 | orchestrator |
2026-02-15 03:23:32.316379 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] ***************
2026-02-15 03:23:32.316387 | orchestrator | Sunday 15 February 2026 03:23:20 +0000 (0:00:01.295) 0:03:46.425 *******
2026-02-15 03:23:32.316394 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:23:32.316401 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:23:32.316408 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:23:32.316416 | orchestrator |
2026-02-15 03:23:32.316423 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] ***************
2026-02-15 03:23:32.316431 | orchestrator | Sunday 15 February 2026 03:23:21 +0000 (0:00:01.463) 0:03:47.888 *******
2026-02-15 03:23:32.316438 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:23:32.316445 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:23:32.316453 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:23:32.316460 | orchestrator |
2026-02-15 03:23:32.316467 | orchestrator | TASK [include_role : nova-cell] ************************************************
2026-02-15 03:23:32.316475 | orchestrator | Sunday 15 February 2026 03:23:23 +0000 (0:00:02.159) 0:03:50.047 *******
2026-02-15 03:23:32.316482 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:23:32.316490 | orchestrator |
2026-02-15 03:23:32.316497 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ******************
2026-02-15 03:23:32.316504 | orchestrator | Sunday 15 February 2026 03:23:25 +0000 (0:00:01.692) 0:03:51.740 *******
2026-02-15 03:23:32.316512 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy)
2026-02-15 03:23:32.316537 | orchestrator |
2026-02-15 03:23:32.316551 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] ***
2026-02-15 03:23:32.316559 | orchestrator | Sunday 15 February 2026 03:23:26 +0000 (0:00:00.919) 0:03:52.659 *******
2026-02-15 03:23:32.316568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-15 03:23:32.316577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-15 03:23:32.316585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-15 03:23:32.316593 | orchestrator |
2026-02-15 03:23:32.316601 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] ***
2026-02-15 03:23:32.316608 | orchestrator | Sunday 15 February 2026 03:23:30 +0000 (0:00:04.386) 0:03:57.045 *******
2026-02-15 03:23:32.316633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-15 03:23:32.316643 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:23:32.316657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-15 03:23:51.791868 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:23:51.792005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-15 03:23:51.792034 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:23:51.792052 | orchestrator |
2026-02-15 03:23:51.792070 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] *****
2026-02-15 03:23:51.792088 | orchestrator | Sunday 15 February 2026 03:23:32 +0000 (0:00:01.502) 0:03:58.548 *******
2026-02-15 03:23:51.792133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-15 03:23:51.792155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-15 03:23:51.792173 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:23:51.792191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-15 03:23:51.792257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-15 03:23:51.792278 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:23:51.792294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-15 03:23:51.792313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})
2026-02-15 03:23:51.792329 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:23:51.792344 | orchestrator |
2026-02-15 03:23:51.792360 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2026-02-15 03:23:51.792378 | orchestrator | Sunday 15 February 2026 03:23:33 +0000 (0:00:01.681) 0:04:00.230 *******
2026-02-15 03:23:51.792395 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:23:51.792412 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:23:51.792429 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:23:51.792446 | orchestrator |
2026-02-15 03:23:51.792463 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2026-02-15 03:23:51.792479 | orchestrator | Sunday 15 February 2026 03:23:36 +0000 (0:00:02.662) 0:04:02.893 *******
2026-02-15 03:23:51.792496 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:23:51.792512 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:23:51.792529 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:23:51.792545 | orchestrator |
2026-02-15 03:23:51.792562 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] *************
2026-02-15 03:23:51.792579 | orchestrator | Sunday 15 February 2026 03:23:39 +0000 (0:00:02.996) 0:04:05.889 *******
2026-02-15 03:23:51.792599 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy)
2026-02-15 03:23:51.792617 | orchestrator |
2026-02-15 03:23:51.792634 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] ***
2026-02-15 03:23:51.792671 | orchestrator | Sunday 15 February 2026 03:23:40 +0000 (0:00:01.216) 0:04:07.106 *******
2026-02-15 03:23:51.792693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-15 03:23:51.792725 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:23:51.792766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-15 03:23:51.792785 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:23:51.792804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-15 03:23:51.792822 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:23:51.792838 | orchestrator |
2026-02-15 03:23:51.792853 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] ***
2026-02-15 03:23:51.792864 | orchestrator | Sunday 15 February 2026 03:23:41 +0000 (0:00:01.100) 0:04:08.207 *******
2026-02-15 03:23:51.792874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-15 03:23:51.792884 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:23:51.792894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})
2026-02-15 03:23:51.792905 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:23:51.792915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082',
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-15 03:23:51.792926 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:23:51.792936 | orchestrator | 2026-02-15 03:23:51.792946 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-15 03:23:51.792956 | orchestrator | Sunday 15 February 2026 03:23:43 +0000 (0:00:01.328) 0:04:09.535 ******* 2026-02-15 03:23:51.792966 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:23:51.792976 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:23:51.792986 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:23:51.792996 | orchestrator | 2026-02-15 03:23:51.793007 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-15 03:23:51.793023 | orchestrator | Sunday 15 February 2026 03:23:44 +0000 (0:00:01.641) 0:04:11.176 ******* 2026-02-15 03:23:51.793041 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:23:51.793052 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:23:51.793062 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:23:51.793072 | orchestrator | 2026-02-15 03:23:51.793082 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-15 03:23:51.793092 | orchestrator | Sunday 15 February 2026 03:23:47 +0000 (0:00:02.795) 0:04:13.971 ******* 2026-02-15 03:23:51.793102 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:23:51.793112 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:23:51.793122 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:23:51.793132 | orchestrator | 2026-02-15 03:23:51.793142 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 
2026-02-15 03:23:51.793152 | orchestrator | Sunday 15 February 2026 03:23:50 +0000 (0:00:02.772) 0:04:16.744 ******* 2026-02-15 03:23:51.793162 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-15 03:23:51.793172 | orchestrator | 2026-02-15 03:23:51.793189 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-15 03:24:07.833060 | orchestrator | Sunday 15 February 2026 03:23:51 +0000 (0:00:01.277) 0:04:18.022 ******* 2026-02-15 03:24:07.833194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 03:24:07.833210 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:07.833221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 03:24:07.833230 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:07.833238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 
'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 03:24:07.833247 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:07.833255 | orchestrator | 2026-02-15 03:24:07.833265 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-15 03:24:07.833274 | orchestrator | Sunday 15 February 2026 03:23:53 +0000 (0:00:01.396) 0:04:19.418 ******* 2026-02-15 03:24:07.833286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 03:24:07.833330 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:07.833344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 
'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 03:24:07.833358 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:07.833389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 03:24:07.833404 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:07.833416 | orchestrator | 2026-02-15 03:24:07.833429 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-15 03:24:07.833442 | orchestrator | Sunday 15 February 2026 03:23:54 +0000 (0:00:01.457) 0:04:20.876 ******* 2026-02-15 03:24:07.833453 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:07.833464 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:07.833476 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:07.833489 | orchestrator | 2026-02-15 03:24:07.833503 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-15 03:24:07.833536 | orchestrator | Sunday 15 February 2026 03:23:56 +0000 (0:00:01.946) 0:04:22.823 ******* 2026-02-15 03:24:07.833546 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:24:07.833554 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:24:07.833562 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:24:07.833570 | orchestrator | 2026-02-15 03:24:07.833577 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-15 03:24:07.833585 | orchestrator | Sunday 15 February 2026 03:23:59 +0000 
(0:00:02.573) 0:04:25.396 ******* 2026-02-15 03:24:07.833594 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:24:07.833601 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:24:07.833609 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:24:07.833617 | orchestrator | 2026-02-15 03:24:07.833625 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-15 03:24:07.833633 | orchestrator | Sunday 15 February 2026 03:24:02 +0000 (0:00:03.423) 0:04:28.820 ******* 2026-02-15 03:24:07.833641 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:24:07.833649 | orchestrator | 2026-02-15 03:24:07.833657 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-15 03:24:07.833665 | orchestrator | Sunday 15 February 2026 03:24:04 +0000 (0:00:01.433) 0:04:30.253 ******* 2026-02-15 03:24:07.833675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 03:24:07.833691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 03:24:07.833701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 03:24:07.833716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 03:24:07.833733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:24:08.560225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 03:24:08.560326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 03:24:08.560364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 03:24:08.560377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 03:24:08.560388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 03:24:08.560416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 03:24:08.560435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 03:24:08.560447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 03:24:08.560466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:24:08.560619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:24:08.560644 | orchestrator | 2026-02-15 03:24:08.560657 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-15 03:24:08.560668 | orchestrator | Sunday 15 February 2026 03:24:07 +0000 (0:00:03.931) 0:04:34.185 ******* 2026-02-15 03:24:08.560695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 03:24:08.560717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 03:24:08.708796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2026-02-15 03:24:08.708933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 03:24:08.708992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:24:08.709017 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:08.709040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 03:24:08.709081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 03:24:08.709104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 03:24:08.709180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 03:24:08.709204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:24:08.709239 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:08.709260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 03:24:08.709280 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 03:24:08.709309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 03:24:08.709332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 03:24:08.709369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 03:24:20.939020 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:20.939156 | orchestrator | 2026-02-15 03:24:20.939170 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-15 03:24:20.939180 | orchestrator | Sunday 15 February 2026 03:24:08 +0000 (0:00:00.762) 0:04:34.947 ******* 2026-02-15 03:24:20.939188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 03:24:20.939198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 03:24:20.939208 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:20.939216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 03:24:20.939224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 03:24:20.939231 | orchestrator | skipping: 
[testbed-node-1] 2026-02-15 03:24:20.939239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 03:24:20.939246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 03:24:20.939254 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:20.939261 | orchestrator | 2026-02-15 03:24:20.939269 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-15 03:24:20.939276 | orchestrator | Sunday 15 February 2026 03:24:09 +0000 (0:00:00.935) 0:04:35.883 ******* 2026-02-15 03:24:20.939283 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:24:20.939291 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:24:20.939298 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:24:20.939305 | orchestrator | 2026-02-15 03:24:20.939312 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-15 03:24:20.939320 | orchestrator | Sunday 15 February 2026 03:24:11 +0000 (0:00:01.842) 0:04:37.725 ******* 2026-02-15 03:24:20.939328 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:24:20.939335 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:24:20.939342 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:24:20.939349 | orchestrator | 2026-02-15 03:24:20.939357 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-15 03:24:20.939364 | orchestrator | Sunday 15 February 2026 03:24:13 +0000 (0:00:02.122) 0:04:39.847 ******* 2026-02-15 03:24:20.939371 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 
2026-02-15 03:24:20.939379 | orchestrator | 2026-02-15 03:24:20.939387 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-15 03:24:20.939394 | orchestrator | Sunday 15 February 2026 03:24:15 +0000 (0:00:01.566) 0:04:41.414 ******* 2026-02-15 03:24:20.939417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:24:20.939464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:24:20.939474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:24:20.939483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:24:20.939496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:24:20.939517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:24:23.020828 | orchestrator | 2026-02-15 03:24:23.020903 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-15 03:24:23.020911 | orchestrator | Sunday 15 February 2026 03:24:20 +0000 (0:00:05.754) 0:04:47.168 ******* 2026-02-15 03:24:23.020918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-15 03:24:23.020927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-15 03:24:23.020934 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:23.020951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-15 03:24:23.020971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-15 03:24:23.020987 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:23.020992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-15 03:24:23.020997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-15 03:24:23.021002 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:23.021007 | orchestrator | 2026-02-15 03:24:23.021012 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-15 03:24:23.021017 | orchestrator | Sunday 15 February 2026 03:24:22 +0000 (0:00:01.110) 0:04:48.279 ******* 2026-02-15 03:24:23.021023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-15 03:24:23.021034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-15 03:24:23.021046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-15 03:24:23.021052 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:23.021057 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-15 03:24:23.021061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-15 03:24:23.021066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-15 03:24:23.021071 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:23.021099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-02-15 03:24:23.021105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-15 03:24:23.021117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-02-15 03:24:29.877390 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:29.877491 | orchestrator | 2026-02-15 03:24:29.877503 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2026-02-15 03:24:29.877512 | orchestrator | Sunday 15 February 2026 03:24:23 +0000 (0:00:00.974) 0:04:49.253 ******* 2026-02-15 
03:24:29.877520 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:29.877527 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:29.877535 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:29.877542 | orchestrator | 2026-02-15 03:24:29.877550 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-15 03:24:29.877557 | orchestrator | Sunday 15 February 2026 03:24:23 +0000 (0:00:00.461) 0:04:49.715 ******* 2026-02-15 03:24:29.877565 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:29.877572 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:29.877579 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:29.877587 | orchestrator | 2026-02-15 03:24:29.877594 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-15 03:24:29.877601 | orchestrator | Sunday 15 February 2026 03:24:25 +0000 (0:00:01.834) 0:04:51.549 ******* 2026-02-15 03:24:29.877609 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:24:29.877616 | orchestrator | 2026-02-15 03:24:29.877624 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-15 03:24:29.877631 | orchestrator | Sunday 15 February 2026 03:24:27 +0000 (0:00:01.888) 0:04:53.437 ******* 2026-02-15 03:24:29.877641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-15 03:24:29.877671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 03:24:29.877692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:29.877700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': 
False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-15 03:24:29.877721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:29.877730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 03:24:29.877739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 03:24:29.877752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:29.877759 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:29.877771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 03:24:29.877779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-15 03:24:29.877787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 03:24:29.877799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:31.517037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:31.517188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 03:24:31.517215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-15 03:24:31.517223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 
'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-15 03:24:31.517229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:31.517235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:31.517251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-15 03:24:31.517261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 03:24:31.517270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-15 03:24:31.517275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:31.517280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:31.517285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 03:24:31.517296 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-15 03:24:32.287304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-15 03:24:32.287407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:32.287419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:32.287429 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 03:24:32.287437 | orchestrator | 2026-02-15 03:24:32.287447 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-15 03:24:32.287455 | orchestrator | Sunday 15 February 2026 03:24:31 +0000 (0:00:04.493) 0:04:57.931 ******* 2026-02-15 03:24:32.287464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-15 03:24:32.287494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 03:24:32.287527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:32.287540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:32.287560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 03:24:32.287577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-15 03:24:32.287591 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-15 03:24:32.287622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-15 03:24:32.471255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:32.471329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 03:24:32.471349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:32.471354 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 03:24:32.471359 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:32.471364 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:32.471370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:32.471394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 03:24:32.471412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-15 03:24:32.471421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-15 03:24:32.471426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:32.471431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-15 03:24:32.471440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:32.471444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 
'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 03:24:32.471453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 03:24:34.517371 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:34.517475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:34.517515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:34.517530 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 03:24:34.517546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-02-15 03:24:34.517583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:1.7.0.20251130', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-02-15 03:24:34.517597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:34.517627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 03:24:34.517645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 03:24:34.517658 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:34.517669 | orchestrator | 2026-02-15 03:24:34.517682 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-15 03:24:34.517695 | orchestrator | Sunday 15 February 2026 03:24:32 +0000 (0:00:00.946) 0:04:58.877 ******* 2026-02-15 03:24:34.517708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-15 03:24:34.517723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-15 03:24:34.517737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-15 03:24:34.517758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-15 03:24:34.517771 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:34.517782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}})  2026-02-15 03:24:34.517794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-15 03:24:34.517806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-15 03:24:34.517817 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-15 03:24:34.517829 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:34.517840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-02-15 03:24:34.517851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-02-15 03:24:34.517863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-15 03:24:34.517889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-02-15 03:24:42.402567 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:42.402686 | orchestrator | 2026-02-15 03:24:42.402713 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-15 03:24:42.402733 | orchestrator | Sunday 15 February 2026 03:24:34 +0000 (0:00:01.866) 0:05:00.744 ******* 2026-02-15 03:24:42.402751 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:42.402770 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:42.402789 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:42.402808 | orchestrator | 2026-02-15 03:24:42.402828 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-15 03:24:42.402848 | orchestrator | Sunday 15 February 2026 03:24:34 +0000 (0:00:00.487) 0:05:01.232 ******* 2026-02-15 03:24:42.402867 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:42.402886 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:42.402897 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:42.402908 | orchestrator | 2026-02-15 03:24:42.402920 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-15 03:24:42.402931 | orchestrator | Sunday 15 February 2026 03:24:36 +0000 (0:00:01.510) 0:05:02.743 ******* 2026-02-15 03:24:42.402968 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:24:42.402979 | orchestrator | 2026-02-15 03:24:42.402991 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-15 03:24:42.403037 | orchestrator | Sunday 15 February 2026 03:24:38 +0000 (0:00:01.844) 0:05:04.587 ******* 
2026-02-15 03:24:42.403055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:24:42.403143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:24:42.403288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:24:42.403316 | orchestrator | 2026-02-15 03:24:42.403329 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-15 03:24:42.403363 | orchestrator | Sunday 15 February 2026 03:24:40 +0000 (0:00:02.286) 0:05:06.873 ******* 2026-02-15 03:24:42.403381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 03:24:42.403405 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:42.403417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 03:24:42.403429 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:42.403440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 03:24:42.403452 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:42.403508 | orchestrator | 2026-02-15 03:24:42.403520 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-15 03:24:42.403531 | orchestrator | Sunday 15 February 2026 03:24:41 +0000 (0:00:00.411) 0:05:07.285 ******* 2026-02-15 03:24:42.403544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-15 03:24:42.403556 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:42.403567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-15 03:24:42.403578 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:42.403589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-15 03:24:42.403600 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:42.403610 | orchestrator | 2026-02-15 03:24:42.403621 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-15 03:24:42.403633 | orchestrator | Sunday 15 February 
2026 03:24:41 +0000 (0:00:00.740) 0:05:08.025 ******* 2026-02-15 03:24:42.403658 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:53.325271 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:53.325387 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:53.325405 | orchestrator | 2026-02-15 03:24:53.325419 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-15 03:24:53.325432 | orchestrator | Sunday 15 February 2026 03:24:42 +0000 (0:00:00.882) 0:05:08.907 ******* 2026-02-15 03:24:53.325444 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:53.325456 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:53.325467 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:53.325478 | orchestrator | 2026-02-15 03:24:53.325490 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-15 03:24:53.325503 | orchestrator | Sunday 15 February 2026 03:24:44 +0000 (0:00:01.411) 0:05:10.319 ******* 2026-02-15 03:24:53.325531 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:24:53.325543 | orchestrator | 2026-02-15 03:24:53.325555 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-15 03:24:53.325566 | orchestrator | Sunday 15 February 2026 03:24:45 +0000 (0:00:01.697) 0:05:12.016 ******* 2026-02-15 03:24:53.325581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 03:24:53.325596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 03:24:53.325609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 03:24:53.325670 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 03:24:53.325694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': 
'30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 03:24:53.325707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 03:24:53.325719 | orchestrator | 2026-02-15 03:24:53.325730 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-15 03:24:53.325741 | orchestrator | Sunday 15 February 2026 03:24:52 +0000 (0:00:06.410) 0:05:18.427 ******* 2026-02-15 03:24:53.325753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-15 03:24:53.325783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-15 03:24:59.605550 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:59.605661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-15 03:24:59.605675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-15 03:24:59.605684 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:59.605693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-15 03:24:59.605717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-15 03:24:59.605726 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:59.605734 | orchestrator | 2026-02-15 03:24:59.605742 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-15 
03:24:59.605751 | orchestrator | Sunday 15 February 2026 03:24:53 +0000 (0:00:01.132) 0:05:19.559 ******* 2026-02-15 03:24:59.605773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-15 03:24:59.605799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-15 03:24:59.605818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-15 03:24:59.605831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-15 03:24:59.605843 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:59.605854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-15 03:24:59.605866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-15 03:24:59.605879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-15 03:24:59.605891 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-15 03:24:59.605904 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:59.605918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-15 03:24:59.605926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2026-02-15 03:24:59.605933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-15 03:24:59.606066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2026-02-15 03:24:59.606078 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:59.606086 | orchestrator | 2026-02-15 03:24:59.606095 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-15 03:24:59.606104 | orchestrator | Sunday 15 February 2026 03:24:54 +0000 (0:00:01.051) 0:05:20.610 ******* 2026-02-15 03:24:59.606112 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:24:59.606121 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:24:59.606129 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:24:59.606138 | orchestrator | 2026-02-15 03:24:59.606147 
| orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-15 03:24:59.606155 | orchestrator | Sunday 15 February 2026 03:24:55 +0000 (0:00:01.331) 0:05:21.941 ******* 2026-02-15 03:24:59.606163 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:24:59.606172 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:24:59.606181 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:24:59.606189 | orchestrator | 2026-02-15 03:24:59.606197 | orchestrator | TASK [include_role : swift] **************************************************** 2026-02-15 03:24:59.606209 | orchestrator | Sunday 15 February 2026 03:24:58 +0000 (0:00:02.409) 0:05:24.351 ******* 2026-02-15 03:24:59.606222 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:59.606243 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:59.606256 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:59.606268 | orchestrator | 2026-02-15 03:24:59.606281 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-15 03:24:59.606293 | orchestrator | Sunday 15 February 2026 03:24:58 +0000 (0:00:00.732) 0:05:25.084 ******* 2026-02-15 03:24:59.606304 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:59.606316 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:24:59.606328 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:24:59.606340 | orchestrator | 2026-02-15 03:24:59.606353 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-15 03:24:59.606365 | orchestrator | Sunday 15 February 2026 03:24:59 +0000 (0:00:00.397) 0:05:25.481 ******* 2026-02-15 03:24:59.606378 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:24:59.606403 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:25:46.253530 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:25:46.253665 | orchestrator | 2026-02-15 03:25:46.253690 | 
orchestrator | TASK [include_role : venus] **************************************************** 2026-02-15 03:25:46.253711 | orchestrator | Sunday 15 February 2026 03:24:59 +0000 (0:00:00.363) 0:05:25.844 ******* 2026-02-15 03:25:46.253729 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:25:46.253749 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:25:46.253854 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:25:46.253878 | orchestrator | 2026-02-15 03:25:46.253894 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-15 03:25:46.253912 | orchestrator | Sunday 15 February 2026 03:24:59 +0000 (0:00:00.315) 0:05:26.160 ******* 2026-02-15 03:25:46.253930 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:25:46.253971 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:25:46.253989 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:25:46.254008 | orchestrator | 2026-02-15 03:25:46.254106 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-15 03:25:46.254126 | orchestrator | Sunday 15 February 2026 03:25:00 +0000 (0:00:00.698) 0:05:26.859 ******* 2026-02-15 03:25:46.254144 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:25:46.254164 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:25:46.254183 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:25:46.254233 | orchestrator | 2026-02-15 03:25:46.254254 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-15 03:25:46.254272 | orchestrator | Sunday 15 February 2026 03:25:01 +0000 (0:00:00.595) 0:05:27.454 ******* 2026-02-15 03:25:46.254290 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:25:46.254309 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:25:46.254327 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:25:46.254346 | orchestrator | 2026-02-15 03:25:46.254364 | orchestrator | 
RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-15 03:25:46.254382 | orchestrator | Sunday 15 February 2026 03:25:01 +0000 (0:00:00.702) 0:05:28.156 ******* 2026-02-15 03:25:46.254400 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:25:46.254417 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:25:46.254434 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:25:46.254450 | orchestrator | 2026-02-15 03:25:46.254467 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-15 03:25:46.254485 | orchestrator | Sunday 15 February 2026 03:25:02 +0000 (0:00:00.386) 0:05:28.542 ******* 2026-02-15 03:25:46.254501 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:25:46.254518 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:25:46.254535 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:25:46.254552 | orchestrator | 2026-02-15 03:25:46.254569 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-15 03:25:46.254587 | orchestrator | Sunday 15 February 2026 03:25:03 +0000 (0:00:01.351) 0:05:29.894 ******* 2026-02-15 03:25:46.254604 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:25:46.254621 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:25:46.254638 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:25:46.254657 | orchestrator | 2026-02-15 03:25:46.254676 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-15 03:25:46.254693 | orchestrator | Sunday 15 February 2026 03:25:04 +0000 (0:00:00.848) 0:05:30.743 ******* 2026-02-15 03:25:46.254711 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:25:46.254730 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:25:46.254748 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:25:46.254766 | orchestrator | 2026-02-15 03:25:46.254816 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] 
****************
2026-02-15 03:25:46.254834 | orchestrator | Sunday 15 February 2026 03:25:05 +0000 (0:00:00.843) 0:05:31.587 *******
2026-02-15 03:25:46.254852 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:25:46.254870 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:25:46.254888 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:25:46.254906 | orchestrator |
2026-02-15 03:25:46.254923 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-02-15 03:25:46.254941 | orchestrator | Sunday 15 February 2026 03:25:15 +0000 (0:00:09.905) 0:05:41.492 *******
2026-02-15 03:25:46.254959 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:25:46.254975 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:25:46.254992 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:25:46.255011 | orchestrator |
2026-02-15 03:25:46.255028 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-02-15 03:25:46.255047 | orchestrator | Sunday 15 February 2026 03:25:16 +0000 (0:00:01.222) 0:05:42.715 *******
2026-02-15 03:25:46.255065 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:25:46.255084 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:25:46.255102 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:25:46.255120 | orchestrator |
2026-02-15 03:25:46.255137 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-02-15 03:25:46.255154 | orchestrator | Sunday 15 February 2026 03:25:27 +0000 (0:00:11.001) 0:05:53.717 *******
2026-02-15 03:25:46.255172 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:25:46.255188 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:25:46.255204 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:25:46.255220 | orchestrator |
2026-02-15 03:25:46.255237 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-02-15 03:25:46.255282 | orchestrator | Sunday 15 February 2026 03:25:32 +0000 (0:00:04.783) 0:05:58.500 *******
2026-02-15 03:25:46.255300 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:25:46.255317 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:25:46.255337 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:25:46.255355 | orchestrator |
2026-02-15 03:25:46.255373 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-02-15 03:25:46.255391 | orchestrator | Sunday 15 February 2026 03:25:36 +0000 (0:00:04.571) 0:06:03.072 *******
2026-02-15 03:25:46.255410 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:25:46.255428 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:25:46.255447 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:25:46.255466 | orchestrator |
2026-02-15 03:25:46.255485 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-02-15 03:25:46.255503 | orchestrator | Sunday 15 February 2026 03:25:37 +0000 (0:00:00.758) 0:06:03.831 *******
2026-02-15 03:25:46.255520 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:25:46.255539 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:25:46.255557 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:25:46.255574 | orchestrator |
2026-02-15 03:25:46.255624 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-02-15 03:25:46.255643 | orchestrator | Sunday 15 February 2026 03:25:37 +0000 (0:00:00.368) 0:06:04.199 *******
2026-02-15 03:25:46.255660 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:25:46.255677 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:25:46.255694 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:25:46.255712 | orchestrator |
2026-02-15 03:25:46.255746 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-02-15 03:25:46.255765 | orchestrator | Sunday 15 February 2026 03:25:38 +0000 (0:00:00.408) 0:06:04.608 *******
2026-02-15 03:25:46.255862 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:25:46.255881 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:25:46.255898 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:25:46.255915 | orchestrator |
2026-02-15 03:25:46.255932 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-02-15 03:25:46.255950 | orchestrator | Sunday 15 February 2026 03:25:38 +0000 (0:00:00.394) 0:06:05.002 *******
2026-02-15 03:25:46.255966 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:25:46.255983 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:25:46.256002 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:25:46.256021 | orchestrator |
2026-02-15 03:25:46.256039 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-02-15 03:25:46.256057 | orchestrator | Sunday 15 February 2026 03:25:39 +0000 (0:00:00.541) 0:06:05.543 *******
2026-02-15 03:25:46.256074 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:25:46.256092 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:25:46.256109 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:25:46.256126 | orchestrator |
2026-02-15 03:25:46.256144 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-02-15 03:25:46.256162 | orchestrator | Sunday 15 February 2026 03:25:39 +0000 (0:00:00.326) 0:06:05.869 *******
2026-02-15 03:25:46.256179 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:25:46.256198 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:25:46.256216 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:25:46.256235 | orchestrator |
2026-02-15 03:25:46.256254 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-02-15 03:25:46.256271 | orchestrator | Sunday 15 February 2026 03:25:44 +0000 (0:00:04.844) 0:06:10.714 *******
2026-02-15 03:25:46.256288 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:25:46.256304 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:25:46.256320 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:25:46.256336 | orchestrator |
2026-02-15 03:25:46.256351 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:25:46.256369 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-15 03:25:46.256404 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-15 03:25:46.256421 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-02-15 03:25:46.256438 | orchestrator |
2026-02-15 03:25:46.256454 | orchestrator |
2026-02-15 03:25:46.256471 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:25:46.256487 | orchestrator | Sunday 15 February 2026 03:25:45 +0000 (0:00:00.855) 0:06:11.569 *******
2026-02-15 03:25:46.256504 | orchestrator | ===============================================================================
2026-02-15 03:25:46.256520 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.00s
2026-02-15 03:25:46.256536 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 9.91s
2026-02-15 03:25:46.256552 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.41s
2026-02-15 03:25:46.256568 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.75s
2026-02-15 03:25:46.256584 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.89s
2026-02-15 03:25:46.256601 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.84s
2026-02-15 03:25:46.256617 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.78s
2026-02-15 03:25:46.256633 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.76s
2026-02-15 03:25:46.256649 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.57s
2026-02-15 03:25:46.256665 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.49s
2026-02-15 03:25:46.256681 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.39s
2026-02-15 03:25:46.256697 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.38s
2026-02-15 03:25:46.256714 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.32s
2026-02-15 03:25:46.256730 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.06s
2026-02-15 03:25:46.256746 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.93s
2026-02-15 03:25:46.256762 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 3.80s
2026-02-15 03:25:46.256845 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.64s
2026-02-15 03:25:46.256862 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.61s
2026-02-15 03:25:46.256878 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.60s
2026-02-15 03:25:46.256893 | orchestrator | haproxy-config : Add configuration for glance when using single external frontend --- 3.59s
2026-02-15 03:25:48.733688 | orchestrator | 2026-02-15 03:25:48 | INFO  | Task c6da1c5a-5380-40fc-87af-5de28ee8269f (opensearch) was prepared for execution.
2026-02-15 03:25:48.733871 | orchestrator | 2026-02-15 03:25:48 | INFO  | It takes a moment until task c6da1c5a-5380-40fc-87af-5de28ee8269f (opensearch) has been started and output is visible here.
2026-02-15 03:25:59.850601 | orchestrator |
2026-02-15 03:25:59.850812 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-15 03:25:59.850836 | orchestrator |
2026-02-15 03:25:59.850849 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-15 03:25:59.850861 | orchestrator | Sunday 15 February 2026 03:25:53 +0000 (0:00:00.270) 0:00:00.270 *******
2026-02-15 03:25:59.850873 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:25:59.850885 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:25:59.850896 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:25:59.850907 | orchestrator |
2026-02-15 03:25:59.850919 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-15 03:25:59.850955 | orchestrator | Sunday 15 February 2026 03:25:53 +0000 (0:00:00.345) 0:00:00.616 *******
2026-02-15 03:25:59.850968 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2026-02-15 03:25:59.850979 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2026-02-15 03:25:59.850990 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2026-02-15 03:25:59.851001 | orchestrator |
2026-02-15 03:25:59.851012 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2026-02-15 03:25:59.851023 | orchestrator |
2026-02-15 03:25:59.851034 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2026-02-15 03:25:59.851045 | orchestrator | Sunday 15 February 2026 03:25:53 +0000 (0:00:00.455) 0:00:01.071 *******
2026-02-15 03:25:59.851056 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0,
testbed-node-1, testbed-node-2 2026-02-15 03:25:59.851067 | orchestrator | 2026-02-15 03:25:59.851078 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-15 03:25:59.851123 | orchestrator | Sunday 15 February 2026 03:25:54 +0000 (0:00:00.518) 0:00:01.590 ******* 2026-02-15 03:25:59.851138 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-15 03:25:59.851151 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-15 03:25:59.851164 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-15 03:25:59.851176 | orchestrator | 2026-02-15 03:25:59.851188 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-15 03:25:59.851201 | orchestrator | Sunday 15 February 2026 03:25:55 +0000 (0:00:00.682) 0:00:02.272 ******* 2026-02-15 03:25:59.851217 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:25:59.851234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:25:59.851274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:25:59.851301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:25:59.851317 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:25:59.851332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:25:59.851346 | orchestrator | 2026-02-15 03:25:59.851359 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-15 03:25:59.851372 | orchestrator | Sunday 15 February 2026 03:25:56 +0000 (0:00:01.791) 0:00:04.064 ******* 2026-02-15 03:25:59.851385 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:25:59.851404 | orchestrator | 2026-02-15 03:25:59.851417 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-15 03:25:59.851431 | orchestrator | Sunday 15 February 2026 03:25:57 +0000 (0:00:00.570) 0:00:04.634 ******* 2026-02-15 03:25:59.851458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g 
-Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:26:00.751188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:26:00.751281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:26:00.751290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:26:00.751307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:26:00.751339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:26:00.751345 | orchestrator | 2026-02-15 03:26:00.751350 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-15 03:26:00.751355 | orchestrator | Sunday 15 February 2026 03:25:59 +0000 (0:00:02.371) 0:00:07.005 ******* 
2026-02-15 03:26:00.751360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-15 03:26:00.751364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  
2026-02-15 03:26:00.751372 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:26:00.751381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-15 03:26:00.751390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': 
'5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-15 03:26:01.852140 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:26:01.852268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-15 03:26:01.852300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-15 03:26:01.852359 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:26:01.852380 | orchestrator | 2026-02-15 03:26:01.852401 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-02-15 03:26:01.852421 | orchestrator | Sunday 15 February 2026 03:26:00 +0000 (0:00:00.897) 0:00:07.903 ******* 2026-02-15 03:26:01.852459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-15 03:26:01.852483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-15 03:26:01.852528 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:26:01.852550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-15 03:26:01.852564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-15 03:26:01.852586 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:26:01.852599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-02-15 03:26:01.852617 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-02-15 03:26:01.852632 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:26:01.852892 | orchestrator | 2026-02-15 03:26:01.852926 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-15 03:26:01.852963 | orchestrator | Sunday 15 February 2026 03:26:01 +0000 (0:00:01.094) 0:00:08.997 ******* 2026-02-15 03:26:10.152189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:26:10.152282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:26:10.152324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:26:10.152350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 
'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:26:10.152379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:26:10.152391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:26:10.152408 | orchestrator | 2026-02-15 03:26:10.152420 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-15 03:26:10.152430 | orchestrator | Sunday 15 February 2026 03:26:04 +0000 (0:00:02.490) 0:00:11.488 ******* 2026-02-15 03:26:10.152439 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:26:10.152449 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:26:10.152458 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:26:10.152467 | orchestrator | 2026-02-15 03:26:10.152476 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-15 03:26:10.152485 | orchestrator | Sunday 15 February 2026 03:26:06 +0000 (0:00:02.367) 0:00:13.856 ******* 2026-02-15 03:26:10.152494 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:26:10.152503 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:26:10.152511 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:26:10.152520 | 
orchestrator | 2026-02-15 03:26:10.152529 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-02-15 03:26:10.152538 | orchestrator | Sunday 15 February 2026 03:26:08 +0000 (0:00:01.838) 0:00:15.694 ******* 2026-02-15 03:26:10.152551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:26:10.152562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2026-02-15 03:26:10.152578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.4.20251130', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-02-15 03:28:52.130992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}}}}) 2026-02-15 03:28:52.131162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:28:52.131195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 
'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-02-15 03:28:52.131217 | orchestrator | 2026-02-15 03:28:52.131239 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-15 03:28:52.131319 | orchestrator | Sunday 15 February 2026 03:26:10 +0000 (0:00:01.613) 0:00:17.308 ******* 2026-02-15 03:28:52.131343 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:28:52.131365 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:28:52.131386 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:28:52.131407 | orchestrator | 2026-02-15 03:28:52.131427 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-15 03:28:52.131477 | orchestrator | Sunday 15 February 2026 03:26:10 +0000 (0:00:00.413) 0:00:17.722 ******* 2026-02-15 03:28:52.131498 | orchestrator | 2026-02-15 03:28:52.131519 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-15 03:28:52.131539 | orchestrator | Sunday 15 February 2026 03:26:10 +0000 (0:00:00.064) 0:00:17.786 ******* 2026-02-15 03:28:52.131559 | orchestrator | 2026-02-15 03:28:52.131578 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-15 03:28:52.131598 | orchestrator | Sunday 15 February 2026 03:26:10 +0000 (0:00:00.074) 0:00:17.861 ******* 2026-02-15 03:28:52.131618 | orchestrator | 2026-02-15 03:28:52.131638 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-15 03:28:52.131682 | orchestrator | Sunday 15 February 2026 03:26:10 +0000 (0:00:00.095) 0:00:17.956 ******* 2026-02-15 03:28:52.131703 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:28:52.131724 | orchestrator | 
2026-02-15 03:28:52.131744 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-15 03:28:52.131763 | orchestrator | Sunday 15 February 2026 03:26:11 +0000 (0:00:00.222) 0:00:18.179 ******* 2026-02-15 03:28:52.131782 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:28:52.131802 | orchestrator | 2026-02-15 03:28:52.131821 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-15 03:28:52.131841 | orchestrator | Sunday 15 February 2026 03:26:11 +0000 (0:00:00.673) 0:00:18.852 ******* 2026-02-15 03:28:52.131860 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:28:52.131879 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:28:52.131898 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:28:52.131917 | orchestrator | 2026-02-15 03:28:52.131936 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-15 03:28:52.131956 | orchestrator | Sunday 15 February 2026 03:27:16 +0000 (0:01:05.218) 0:01:24.071 ******* 2026-02-15 03:28:52.131975 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:28:52.131993 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:28:52.132012 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:28:52.132031 | orchestrator | 2026-02-15 03:28:52.132051 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-15 03:28:52.132070 | orchestrator | Sunday 15 February 2026 03:28:41 +0000 (0:01:24.722) 0:02:48.793 ******* 2026-02-15 03:28:52.132090 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:28:52.132109 | orchestrator | 2026-02-15 03:28:52.132128 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-15 03:28:52.132147 | orchestrator | Sunday 15 February 2026 03:28:42 +0000 
(0:00:00.548) 0:02:49.342 ******* 2026-02-15 03:28:52.132167 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:28:52.132186 | orchestrator | 2026-02-15 03:28:52.132204 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-15 03:28:52.132223 | orchestrator | Sunday 15 February 2026 03:28:45 +0000 (0:00:02.843) 0:02:52.185 ******* 2026-02-15 03:28:52.132242 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:28:52.132283 | orchestrator | 2026-02-15 03:28:52.132303 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-15 03:28:52.132323 | orchestrator | Sunday 15 February 2026 03:28:47 +0000 (0:00:02.210) 0:02:54.396 ******* 2026-02-15 03:28:52.132341 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:28:52.132359 | orchestrator | 2026-02-15 03:28:52.132377 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-15 03:28:52.132395 | orchestrator | Sunday 15 February 2026 03:28:49 +0000 (0:00:02.604) 0:02:57.000 ******* 2026-02-15 03:28:52.132414 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:28:52.132431 | orchestrator | 2026-02-15 03:28:52.132450 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:28:52.132479 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-15 03:28:52.132514 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-15 03:28:52.132531 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-15 03:28:52.132548 | orchestrator | 2026-02-15 03:28:52.132568 | orchestrator | 2026-02-15 03:28:52.132587 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:28:52.132605 | orchestrator | Sunday 15 
February 2026 03:28:52 +0000 (0:00:02.271) 0:02:59.272 ******* 2026-02-15 03:28:52.132619 | orchestrator | =============================================================================== 2026-02-15 03:28:52.132630 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 84.72s 2026-02-15 03:28:52.132641 | orchestrator | opensearch : Restart opensearch container ------------------------------ 65.22s 2026-02-15 03:28:52.132652 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.84s 2026-02-15 03:28:52.132663 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.60s 2026-02-15 03:28:52.132673 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.49s 2026-02-15 03:28:52.132684 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.37s 2026-02-15 03:28:52.132695 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.37s 2026-02-15 03:28:52.132706 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.27s 2026-02-15 03:28:52.132717 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.21s 2026-02-15 03:28:52.132727 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.84s 2026-02-15 03:28:52.132738 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.79s 2026-02-15 03:28:52.132749 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.61s 2026-02-15 03:28:52.132760 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.09s 2026-02-15 03:28:52.132771 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.90s 2026-02-15 03:28:52.132781 | orchestrator | opensearch : Setting 
sysctl values -------------------------------------- 0.68s 2026-02-15 03:28:52.132792 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.67s 2026-02-15 03:28:52.132812 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.57s 2026-02-15 03:28:52.512767 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s 2026-02-15 03:28:52.512840 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2026-02-15 03:28:52.512846 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-02-15 03:28:55.042844 | orchestrator | 2026-02-15 03:28:55 | INFO  | Task 084abf4b-9f62-406f-98a4-10a030ce7310 (memcached) was prepared for execution. 2026-02-15 03:28:55.042972 | orchestrator | 2026-02-15 03:28:55 | INFO  | It takes a moment until task 084abf4b-9f62-406f-98a4-10a030ce7310 (memcached) has been started and output is visible here. 
2026-02-15 03:29:07.455755 | orchestrator | 2026-02-15 03:29:07.455859 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 03:29:07.455869 | orchestrator | 2026-02-15 03:29:07.455876 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 03:29:07.455883 | orchestrator | Sunday 15 February 2026 03:28:59 +0000 (0:00:00.280) 0:00:00.280 ******* 2026-02-15 03:29:07.455889 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:29:07.455897 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:29:07.455903 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:29:07.455909 | orchestrator | 2026-02-15 03:29:07.455915 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 03:29:07.455944 | orchestrator | Sunday 15 February 2026 03:28:59 +0000 (0:00:00.297) 0:00:00.577 ******* 2026-02-15 03:29:07.455952 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-15 03:29:07.455959 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-15 03:29:07.455965 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-15 03:29:07.455971 | orchestrator | 2026-02-15 03:29:07.455978 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-15 03:29:07.455984 | orchestrator | 2026-02-15 03:29:07.455990 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-15 03:29:07.455996 | orchestrator | Sunday 15 February 2026 03:29:00 +0000 (0:00:00.463) 0:00:01.041 ******* 2026-02-15 03:29:07.456003 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:29:07.456011 | orchestrator | 2026-02-15 03:29:07.456017 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 
2026-02-15 03:29:07.456023 | orchestrator | Sunday 15 February 2026 03:29:00 +0000 (0:00:00.545) 0:00:01.586 ******* 2026-02-15 03:29:07.456029 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-15 03:29:07.456036 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-15 03:29:07.456042 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-15 03:29:07.456048 | orchestrator | 2026-02-15 03:29:07.456067 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-15 03:29:07.456073 | orchestrator | Sunday 15 February 2026 03:29:01 +0000 (0:00:00.653) 0:00:02.240 ******* 2026-02-15 03:29:07.456079 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-02-15 03:29:07.456086 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-02-15 03:29:07.456092 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-02-15 03:29:07.456098 | orchestrator | 2026-02-15 03:29:07.456104 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2026-02-15 03:29:07.456111 | orchestrator | Sunday 15 February 2026 03:29:03 +0000 (0:00:01.804) 0:00:04.044 ******* 2026-02-15 03:29:07.456128 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:29:07.456134 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:29:07.456140 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:29:07.456147 | orchestrator | 2026-02-15 03:29:07.456153 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-15 03:29:07.456159 | orchestrator | Sunday 15 February 2026 03:29:04 +0000 (0:00:01.519) 0:00:05.563 ******* 2026-02-15 03:29:07.456165 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:29:07.456172 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:29:07.456178 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:29:07.456184 | orchestrator | 2026-02-15 
03:29:07.456190 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:29:07.456197 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 03:29:07.456204 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 03:29:07.456210 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 03:29:07.456216 | orchestrator | 2026-02-15 03:29:07.456223 | orchestrator | 2026-02-15 03:29:07.456276 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:29:07.456284 | orchestrator | Sunday 15 February 2026 03:29:06 +0000 (0:00:02.143) 0:00:07.706 ******* 2026-02-15 03:29:07.456290 | orchestrator | =============================================================================== 2026-02-15 03:29:07.456297 | orchestrator | memcached : Restart memcached container --------------------------------- 2.14s 2026-02-15 03:29:07.456303 | orchestrator | memcached : Copying over config.json files for services ----------------- 1.80s 2026-02-15 03:29:07.456314 | orchestrator | memcached : Check memcached container ----------------------------------- 1.52s 2026-02-15 03:29:07.456321 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.65s 2026-02-15 03:29:07.456329 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.55s 2026-02-15 03:29:07.456337 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.46s 2026-02-15 03:29:07.456344 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2026-02-15 03:29:09.941944 | orchestrator | 2026-02-15 03:29:09 | INFO  | Task 878e6d97-0dcc-494d-85a3-63a4061cdea7 (redis) was prepared for execution. 
2026-02-15 03:29:09.942047 | orchestrator | 2026-02-15 03:29:09 | INFO  | It takes a moment until task 878e6d97-0dcc-494d-85a3-63a4061cdea7 (redis) has been started and output is visible here.
2026-02-15 03:29:19.461579 | orchestrator |
2026-02-15 03:29:19.461715 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-15 03:29:19.461743 | orchestrator |
2026-02-15 03:29:19.461762 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-15 03:29:19.461780 | orchestrator | Sunday 15 February 2026 03:29:14 +0000 (0:00:00.277) 0:00:00.277 *******
2026-02-15 03:29:19.461795 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:29:19.461812 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:29:19.461827 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:29:19.461843 | orchestrator |
2026-02-15 03:29:19.461858 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-15 03:29:19.461875 | orchestrator | Sunday 15 February 2026 03:29:14 +0000 (0:00:00.343) 0:00:00.621 *******
2026-02-15 03:29:19.461890 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-02-15 03:29:19.461907 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-02-15 03:29:19.461923 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-02-15 03:29:19.461941 | orchestrator |
2026-02-15 03:29:19.461958 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-02-15 03:29:19.461973 | orchestrator |
2026-02-15 03:29:19.461989 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-02-15 03:29:19.462006 | orchestrator | Sunday 15 February 2026 03:29:15 +0000 (0:00:00.476) 0:00:01.097 *******
2026-02-15 03:29:19.462126 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:29:19.462150 | orchestrator |
2026-02-15 03:29:19.462167 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-02-15 03:29:19.462186 | orchestrator | Sunday 15 February 2026 03:29:15 +0000 (0:00:00.505) 0:00:01.603 *******
2026-02-15 03:29:19.462316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:19.462352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:19.462374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:19.462425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:19.462476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:19.462497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:19.462515 | orchestrator |
2026-02-15 03:29:19.462532 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2026-02-15 03:29:19.462548 | orchestrator | Sunday 15 February 2026 03:29:16 +0000 (0:00:01.090) 0:00:02.694 *******
2026-02-15 03:29:19.462566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:19.462626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:19.462659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:19.462674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:19.462703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:23.676790 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:23.676921 | orchestrator |
2026-02-15 03:29:23.676949 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2026-02-15 03:29:23.676972 | orchestrator | Sunday 15 February 2026 03:29:19 +0000 (0:00:02.572) 0:00:05.266 *******
2026-02-15 03:29:23.676994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:23.677038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:23.677079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:23.677091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:23.677103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:23.677134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:23.677146 | orchestrator |
2026-02-15 03:29:23.677158 | orchestrator | TASK [redis : Check redis containers] ******************************************
2026-02-15 03:29:23.677169 | orchestrator | Sunday 15 February 2026 03:29:21 +0000 (0:00:02.505) 0:00:07.772 *******
2026-02-15 03:29:23.677181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:23.677269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:23.677305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20251130', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2026-02-15 03:29:23.677318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:23.677329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:23.677354 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2026-02-15 03:29:38.752844 | orchestrator |
2026-02-15 03:29:38.752954 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-15 03:29:38.752974 | orchestrator | Sunday 15 February 2026 03:29:23 +0000 (0:00:01.504) 0:00:09.277 *******
2026-02-15 03:29:38.752990 | orchestrator |
2026-02-15 03:29:38.753004 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-15 03:29:38.753013 | orchestrator | Sunday 15 February 2026 03:29:23 +0000 (0:00:00.068) 0:00:09.345 *******
2026-02-15 03:29:38.753021 | orchestrator |
2026-02-15 03:29:38.753029 | orchestrator | TASK [redis : Flush handlers] **************************************************
2026-02-15 03:29:38.753037 | orchestrator | Sunday 15 February 2026 03:29:23 +0000 (0:00:00.065) 0:00:09.411 *******
2026-02-15 03:29:38.753045 | orchestrator |
2026-02-15 03:29:38.753053 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2026-02-15 03:29:38.753084 | orchestrator | Sunday 15 February 2026 03:29:23 +0000 (0:00:00.070) 0:00:09.481 *******
2026-02-15 03:29:38.753092 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:29:38.753102 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:29:38.753110 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:29:38.753117 | orchestrator |
2026-02-15 03:29:38.753126 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2026-02-15 03:29:38.753134 | orchestrator | Sunday 15 February 2026 03:29:30 +0000 (0:00:06.649) 0:00:16.131 *******
2026-02-15 03:29:38.753142 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:29:38.753150 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:29:38.753212 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:29:38.753223 | orchestrator |
2026-02-15 03:29:38.753231 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:29:38.753242 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:29:38.753258 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:29:38.753271 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:29:38.753284 | orchestrator |
2026-02-15 03:29:38.753296 | orchestrator |
2026-02-15 03:29:38.753307 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:29:38.753319 | orchestrator | Sunday 15 February 2026 03:29:38 +0000 (0:00:08.047) 0:00:24.178 *******
2026-02-15 03:29:38.753331 | orchestrator | ===============================================================================
2026-02-15 03:29:38.753344 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.05s
2026-02-15 03:29:38.753356 | orchestrator | redis : Restart redis container ----------------------------------------- 6.65s
2026-02-15 03:29:38.753369 | orchestrator | redis : Copying over default config.json files -------------------------- 2.57s
2026-02-15 03:29:38.753383 | orchestrator | redis : Copying over redis config files --------------------------------- 2.51s
2026-02-15 03:29:38.753398 | orchestrator | redis : Check redis containers ------------------------------------------ 1.50s
2026-02-15 03:29:38.753413 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.09s
2026-02-15 03:29:38.753426 | orchestrator | redis : include_tasks --------------------------------------------------- 0.51s
2026-02-15 03:29:38.753440 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.48s
2026-02-15 03:29:38.753450 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.34s
2026-02-15 03:29:38.753459 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.20s
2026-02-15 03:29:41.325506 | orchestrator | 2026-02-15 03:29:41 | INFO  | Task 0fd2a67f-1d73-4d50-9515-7c296937ea3a (mariadb) was prepared for execution.
2026-02-15 03:29:41.325589 | orchestrator | 2026-02-15 03:29:41 | INFO  | It takes a moment until task 0fd2a67f-1d73-4d50-9515-7c296937ea3a (mariadb) has been started and output is visible here.
2026-02-15 03:29:56.063440 | orchestrator |
2026-02-15 03:29:56.063554 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-15 03:29:56.063572 | orchestrator |
2026-02-15 03:29:56.063588 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-15 03:29:56.063608 | orchestrator | Sunday 15 February 2026 03:29:45 +0000 (0:00:00.172) 0:00:00.172 *******
2026-02-15 03:29:56.063627 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:29:56.063646 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:29:56.063666 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:29:56.063685 | orchestrator |
2026-02-15 03:29:56.063705 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-15 03:29:56.063725 | orchestrator | Sunday 15 February 2026 03:29:46 +0000 (0:00:00.330) 0:00:00.503 *******
2026-02-15 03:29:56.063767 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-02-15 03:29:56.063780 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-02-15 03:29:56.063791 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-02-15 03:29:56.063802 | orchestrator |
2026-02-15 03:29:56.063813 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-02-15 03:29:56.063824 | orchestrator |
2026-02-15 03:29:56.063835 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-02-15 03:29:56.063847 | orchestrator | Sunday 15 February 2026 03:29:46 +0000 (0:00:00.639) 0:00:01.142 *******
2026-02-15 03:29:56.063858 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 03:29:56.063869 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-15 03:29:56.063880 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-15 03:29:56.063891 | orchestrator |
2026-02-15 03:29:56.063903 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-15 03:29:56.063914 | orchestrator | Sunday 15 February 2026 03:29:47 +0000 (0:00:00.398) 0:00:01.541 *******
2026-02-15 03:29:56.063925 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:29:56.063937 | orchestrator |
2026-02-15 03:29:56.063948 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-02-15 03:29:56.063960 | orchestrator | Sunday 15 February 2026 03:29:47 +0000 (0:00:00.586) 0:00:02.127 *******
2026-02-15 03:29:56.063997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 03:29:56.064051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 03:29:56.064096 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 03:29:56.064119 | orchestrator |
2026-02-15 03:29:56.064235 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-02-15 03:29:56.064249 | orchestrator | Sunday 15 February 2026 03:29:50 +0000 (0:00:02.810) 0:00:04.937 *******
2026-02-15 03:29:56.064260 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:29:56.064273 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:29:56.064284 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:29:56.064295 | orchestrator |
2026-02-15 03:29:56.064306 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-02-15 03:29:56.064317 | orchestrator | Sunday 15 February 2026 03:29:51 +0000 (0:00:00.693) 0:00:05.631 *******
2026-02-15 03:29:56.064328 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:29:56.064338 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:29:56.064349 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:29:56.064367 | orchestrator |
2026-02-15 03:29:56.064385 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-15 03:29:56.064415 | orchestrator | Sunday 15 February 2026 03:29:52 +0000 (0:00:01.428) 0:00:07.059 *******
2026-02-15 03:29:56.064448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 03:30:04.328273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 03:30:04.328355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor',
'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-15 03:30:04.328406 | orchestrator | 2026-02-15 03:30:04.328427 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-02-15 03:30:04.328446 | orchestrator | Sunday 15 February 2026 03:29:56 +0000 (0:00:03.317) 0:00:10.376 ******* 2026-02-15 03:30:04.328464 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:30:04.328482 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:30:04.328499 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:30:04.328516 | orchestrator | 2026-02-15 03:30:04.328533 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-02-15 03:30:04.328566 | orchestrator | Sunday 15 February 2026 03:29:57 +0000 (0:00:01.073) 0:00:11.450 ******* 2026-02-15 03:30:04.328584 | 
orchestrator | changed: [testbed-node-0] 2026-02-15 03:30:04.328601 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:30:04.328618 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:30:04.328635 | orchestrator | 2026-02-15 03:30:04.328652 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-15 03:30:04.328669 | orchestrator | Sunday 15 February 2026 03:30:01 +0000 (0:00:04.042) 0:00:15.492 ******* 2026-02-15 03:30:04.328686 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:30:04.328703 | orchestrator | 2026-02-15 03:30:04.328720 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-15 03:30:04.328737 | orchestrator | Sunday 15 February 2026 03:30:01 +0000 (0:00:00.562) 0:00:16.055 ******* 2026-02-15 03:30:04.328766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 
check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:30:04.328794 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:30:04.328820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server 
testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:30:09.354590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:30:09.354735 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:30:09.354755 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:30:09.354767 | orchestrator | 2026-02-15 03:30:09.354779 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-15 03:30:09.354791 | orchestrator | Sunday 15 February 2026 03:30:04 +0000 (0:00:02.588) 0:00:18.643 ******* 2026-02-15 03:30:09.354804 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:30:09.354816 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:30:09.354856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:30:09.354877 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:30:09.354888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 
192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:30:09.354899 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:30:09.354908 | orchestrator | 2026-02-15 03:30:09.354919 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-15 03:30:09.354930 | orchestrator | Sunday 15 February 2026 03:30:06 +0000 (0:00:02.613) 0:00:21.257 ******* 2026-02-15 03:30:09.354954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 
'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:30:12.260884 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:30:12.261060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:30:12.261093 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:30:12.261165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 03:30:12.261221 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:30:12.261243 | orchestrator | 2026-02-15 03:30:12.261265 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-02-15 03:30:12.261286 | orchestrator | Sunday 15 February 2026 03:30:09 +0000 (0:00:02.412) 0:00:23.670 ******* 2026-02-15 03:30:12.261337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-15 03:30:12.261369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-15 03:30:12.261420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-15 03:32:30.337500 | orchestrator | 2026-02-15 03:32:30.337576 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-15 03:32:30.337583 | orchestrator | Sunday 15 February 2026 03:30:12 +0000 (0:00:02.904) 0:00:26.574 ******* 2026-02-15 03:32:30.337590 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:32:30.337598 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:32:30.337604 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:32:30.337610 | orchestrator | 2026-02-15 03:32:30.337617 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-02-15 03:32:30.337623 | orchestrator | Sunday 15 February 2026 03:30:13 +0000 (0:00:00.841) 0:00:27.416 ******* 2026-02-15 03:32:30.337629 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:32:30.337636 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:32:30.337642 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:32:30.337648 | orchestrator | 2026-02-15 03:32:30.337654 | orchestrator | TASK [mariadb : Establish 
whether the cluster has already existed] ************* 2026-02-15 03:32:30.337660 | orchestrator | Sunday 15 February 2026 03:30:13 +0000 (0:00:00.577) 0:00:27.994 ******* 2026-02-15 03:32:30.337666 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:32:30.337673 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:32:30.337679 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:32:30.337685 | orchestrator | 2026-02-15 03:32:30.337691 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-15 03:32:30.337698 | orchestrator | Sunday 15 February 2026 03:30:14 +0000 (0:00:00.356) 0:00:28.350 ******* 2026-02-15 03:32:30.337706 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-02-15 03:32:30.337735 | orchestrator | ...ignoring 2026-02-15 03:32:30.337742 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-02-15 03:32:30.337749 | orchestrator | ...ignoring 2026-02-15 03:32:30.337755 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-02-15 03:32:30.337775 | orchestrator | ...ignoring 2026-02-15 03:32:30.337789 | orchestrator | 2026-02-15 03:32:30.337795 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-15 03:32:30.337802 | orchestrator | Sunday 15 February 2026 03:30:24 +0000 (0:00:10.888) 0:00:39.238 ******* 2026-02-15 03:32:30.337807 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:32:30.337810 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:32:30.337814 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:32:30.337821 | orchestrator | 2026-02-15 03:32:30.337827 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-15 03:32:30.337833 | orchestrator | Sunday 15 February 2026 03:30:25 +0000 (0:00:00.468) 0:00:39.707 ******* 2026-02-15 03:32:30.337853 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:32:30.337860 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:32:30.337867 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:32:30.337909 | orchestrator | 2026-02-15 03:32:30.337918 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-15 03:32:30.337926 | orchestrator | Sunday 15 February 2026 03:30:26 +0000 (0:00:00.730) 0:00:40.437 ******* 2026-02-15 03:32:30.337933 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:32:30.337939 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:32:30.337945 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:32:30.337951 | orchestrator | 2026-02-15 03:32:30.337957 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-15 03:32:30.337963 | orchestrator | Sunday 15 February 2026 03:30:26 +0000 (0:00:00.456) 0:00:40.893 ******* 2026-02-15 03:32:30.337969 | orchestrator | skipping: 
[testbed-node-0]
2026-02-15 03:32:30.337976 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:32:30.337982 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:32:30.337988 | orchestrator |
2026-02-15 03:32:30.337994 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-02-15 03:32:30.338000 | orchestrator | Sunday 15 February 2026 03:30:26 +0000 (0:00:00.432) 0:00:41.325 *******
2026-02-15 03:32:30.338006 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:32:30.338062 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:32:30.338067 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:32:30.338071 | orchestrator |
2026-02-15 03:32:30.338076 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-02-15 03:32:30.338081 | orchestrator | Sunday 15 February 2026 03:30:27 +0000 (0:00:00.447) 0:00:41.773 *******
2026-02-15 03:32:30.338085 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:32:30.338090 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:32:30.338094 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:32:30.338098 | orchestrator |
2026-02-15 03:32:30.338103 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-15 03:32:30.338107 | orchestrator | Sunday 15 February 2026 03:30:28 +0000 (0:00:00.664) 0:00:42.437 *******
2026-02-15 03:32:30.338112 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:32:30.338116 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:32:30.338121 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-02-15 03:32:30.338125 | orchestrator |
2026-02-15 03:32:30.338129 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-02-15 03:32:30.338134 | orchestrator | Sunday 15 February 2026 03:30:28 +0000 (0:00:00.428) 0:00:42.866 *******
2026-02-15 03:32:30.338138 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:32:30.338143 | orchestrator |
2026-02-15 03:32:30.338147 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-02-15 03:32:30.338158 | orchestrator | Sunday 15 February 2026 03:30:39 +0000 (0:00:10.617) 0:00:53.483 *******
2026-02-15 03:32:30.338162 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:32:30.338166 | orchestrator |
2026-02-15 03:32:30.338170 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-15 03:32:30.338174 | orchestrator | Sunday 15 February 2026 03:30:39 +0000 (0:00:00.145) 0:00:53.629 *******
2026-02-15 03:32:30.338178 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:32:30.338194 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:32:30.338199 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:32:30.338205 | orchestrator |
2026-02-15 03:32:30.338211 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-02-15 03:32:30.338215 | orchestrator | Sunday 15 February 2026 03:30:40 +0000 (0:00:01.075) 0:00:54.705 *******
2026-02-15 03:32:30.338219 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:32:30.338222 | orchestrator |
2026-02-15 03:32:30.338226 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-02-15 03:32:30.338230 | orchestrator | Sunday 15 February 2026 03:30:48 +0000 (0:00:08.080) 0:01:02.785 *******
2026-02-15 03:32:30.338234 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:32:30.338238 | orchestrator |
2026-02-15 03:32:30.338241 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-02-15 03:32:30.338245 | orchestrator | Sunday 15 February 2026 03:30:50 +0000 (0:00:01.707) 0:01:04.492 *******
2026-02-15 03:32:30.338249 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:32:30.338253 | orchestrator |
2026-02-15 03:32:30.338257 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] ***
2026-02-15 03:32:30.338261 | orchestrator | Sunday 15 February 2026 03:30:52 +0000 (0:00:02.591) 0:01:07.084 *******
2026-02-15 03:32:30.338264 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:32:30.338268 | orchestrator |
2026-02-15 03:32:30.338272 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ********
2026-02-15 03:32:30.338276 | orchestrator | Sunday 15 February 2026 03:30:52 +0000 (0:00:00.131) 0:01:07.216 *******
2026-02-15 03:32:30.338280 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:32:30.338283 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:32:30.338287 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:32:30.338291 | orchestrator |
2026-02-15 03:32:30.338295 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] *************************
2026-02-15 03:32:30.338299 | orchestrator | Sunday 15 February 2026 03:30:53 +0000 (0:00:00.330) 0:01:07.546 *******
2026-02-15 03:32:30.338302 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:32:30.338306 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2026-02-15 03:32:30.338310 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:32:30.338314 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:32:30.338318 | orchestrator |
2026-02-15 03:32:30.338321 | orchestrator | PLAY [Restart mariadb services] ************************************************
2026-02-15 03:32:30.338325 | orchestrator | skipping: no hosts matched
2026-02-15 03:32:30.338329 | orchestrator |
2026-02-15 03:32:30.338333 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-15 03:32:30.338337 | orchestrator |
2026-02-15 03:32:30.338340 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-15 03:32:30.338344 | orchestrator | Sunday 15 February 2026 03:30:53 +0000 (0:00:00.604) 0:01:08.150 *******
2026-02-15 03:32:30.338348 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:32:30.338352 | orchestrator |
2026-02-15 03:32:30.338361 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-15 03:32:30.338365 | orchestrator | Sunday 15 February 2026 03:31:12 +0000 (0:00:18.835) 0:01:26.986 *******
2026-02-15 03:32:30.338369 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:32:30.338373 | orchestrator |
2026-02-15 03:32:30.338376 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-15 03:32:30.338387 | orchestrator | Sunday 15 February 2026 03:31:29 +0000 (0:00:16.583) 0:01:43.570 *******
2026-02-15 03:32:30.338391 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:32:30.338395 | orchestrator |
2026-02-15 03:32:30.338399 | orchestrator | PLAY [Start mariadb services] **************************************************
2026-02-15 03:32:30.338402 | orchestrator |
2026-02-15 03:32:30.338406 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-15 03:32:30.338410 | orchestrator | Sunday 15 February 2026 03:31:31 +0000 (0:00:02.587) 0:01:46.157 *******
2026-02-15 03:32:30.338414 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:32:30.338418 | orchestrator |
2026-02-15 03:32:30.338422 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-15 03:32:30.338426 | orchestrator | Sunday 15 February 2026 03:31:55 +0000 (0:00:23.650) 0:02:09.808 *******
2026-02-15 03:32:30.338429 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:32:30.338433 | orchestrator |
2026-02-15 03:32:30.338437 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-15 03:32:30.338441 | orchestrator | Sunday 15 February 2026 03:32:07 +0000 (0:00:11.588) 0:02:21.397 *******
2026-02-15 03:32:30.338445 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:32:30.338448 | orchestrator |
2026-02-15 03:32:30.338452 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2026-02-15 03:32:30.338456 | orchestrator |
2026-02-15 03:32:30.338460 | orchestrator | TASK [mariadb : Restart MariaDB container] *************************************
2026-02-15 03:32:30.338463 | orchestrator | Sunday 15 February 2026 03:32:09 +0000 (0:00:02.571) 0:02:23.968 *******
2026-02-15 03:32:30.338467 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:32:30.338471 | orchestrator |
2026-02-15 03:32:30.338475 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************
2026-02-15 03:32:30.338479 | orchestrator | Sunday 15 February 2026 03:32:22 +0000 (0:00:12.538) 0:02:36.506 *******
2026-02-15 03:32:30.338482 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:32:30.338486 | orchestrator |
2026-02-15 03:32:30.338490 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************
2026-02-15 03:32:30.338494 | orchestrator | Sunday 15 February 2026 03:32:26 +0000 (0:00:04.623) 0:02:41.129 *******
2026-02-15 03:32:30.338498 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:32:30.338501 | orchestrator |
2026-02-15 03:32:30.338505 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2026-02-15 03:32:30.338509 | orchestrator |
2026-02-15 03:32:30.338513 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2026-02-15 03:32:30.338517 | orchestrator | Sunday 15 February 2026 03:32:29 +0000 (0:00:02.854) 0:02:43.984 *******
2026-02-15 03:32:30.338521 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:32:30.338524 | orchestrator |
2026-02-15 03:32:30.338528 | orchestrator | TASK [mariadb : Creating shard root mysql user] ********************************
2026-02-15 03:32:30.338536 | orchestrator | Sunday 15 February 2026 03:32:30 +0000 (0:00:00.663) 0:02:44.648 *******
2026-02-15 03:32:43.075550 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:32:43.075689 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:32:43.075707 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:32:43.075719 | orchestrator |
2026-02-15 03:32:43.075732 | orchestrator | TASK [mariadb : Creating mysql monitor user] ***********************************
2026-02-15 03:32:43.075744 | orchestrator | Sunday 15 February 2026 03:32:32 +0000 (0:00:02.271) 0:02:46.919 *******
2026-02-15 03:32:43.075756 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:32:43.075767 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:32:43.075778 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:32:43.075789 | orchestrator |
2026-02-15 03:32:43.075800 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] *********
2026-02-15 03:32:43.075811 | orchestrator | Sunday 15 February 2026 03:32:34 +0000 (0:00:02.114) 0:02:49.034 *******
2026-02-15 03:32:43.075822 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:32:43.075833 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:32:43.075934 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:32:43.075948 | orchestrator |
2026-02-15 03:32:43.075960 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] ***
2026-02-15 03:32:43.075971 | orchestrator | Sunday 15 February 2026 03:32:37 +0000 (0:00:02.340) 0:02:51.374 *******
2026-02-15 03:32:43.076006 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:32:43.076019 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:32:43.076030 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:32:43.076040 | orchestrator |
2026-02-15 03:32:43.076052 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] **************
2026-02-15 03:32:43.076063 | orchestrator | Sunday 15 February 2026 03:32:39 +0000 (0:00:02.095) 0:02:53.470 *******
2026-02-15 03:32:43.076074 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:32:43.076086 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:32:43.076098 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:32:43.076108 | orchestrator |
2026-02-15 03:32:43.076120 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2026-02-15 03:32:43.076131 | orchestrator | Sunday 15 February 2026 03:32:42 +0000 (0:00:00.468) 0:02:56.510 *******
2026-02-15 03:32:43.076142 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:32:43.076153 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:32:43.076164 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:32:43.076175 | orchestrator |
2026-02-15 03:32:43.076186 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:32:43.076199 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1
2026-02-15 03:32:43.076212 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-15 03:32:43.076239 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1
2026-02-15 03:32:43.076254 | orchestrator |
2026-02-15 03:32:43.076273 | orchestrator |
2026-02-15 03:32:43.076293 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:32:43.076321 | orchestrator | Sunday 15 February 2026 03:32:42 +0000 (0:00:00.468) 0:02:56.979 *******
2026-02-15 03:32:43.076343 | orchestrator | ===============================================================================
2026-02-15 03:32:43.076360 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 42.49s
2026-02-15 03:32:43.076379 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 28.17s
2026-02-15 03:32:43.076396 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.54s
2026-02-15 03:32:43.076413 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s
2026-02-15 03:32:43.076434 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.62s
2026-02-15 03:32:43.076453 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.08s
2026-02-15 03:32:43.076471 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.16s
2026-02-15 03:32:43.076490 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.62s
2026-02-15 03:32:43.076510 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.04s
2026-02-15 03:32:43.076527 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.32s
2026-02-15 03:32:43.076544 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.04s
2026-02-15 03:32:43.076562 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 2.90s
2026-02-15 03:32:43.076580 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.85s
2026-02-15 03:32:43.076598 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.81s
2026-02-15 03:32:43.076617 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.61s
2026-02-15 03:32:43.076652 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.59s
2026-02-15 03:32:43.076670 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.59s
2026-02-15 03:32:43.076688 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.41s
2026-02-15 03:32:43.076706 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.34s
2026-02-15 03:32:43.076725 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.27s
2026-02-15 03:32:45.779657 | orchestrator | 2026-02-15 03:32:45 | INFO  | Task 2d7f82ae-587e-4ee0-ac5e-2659a54adf19 (rabbitmq) was prepared for execution.
2026-02-15 03:32:45.779743 | orchestrator | 2026-02-15 03:32:45 | INFO  | It takes a moment until task 2d7f82ae-587e-4ee0-ac5e-2659a54adf19 (rabbitmq) has been started and output is visible here.
2026-02-15 03:32:59.826609 | orchestrator |
2026-02-15 03:32:59.826714 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-15 03:32:59.826728 | orchestrator |
2026-02-15 03:32:59.826737 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-15 03:32:59.826745 | orchestrator | Sunday 15 February 2026 03:32:50 +0000 (0:00:00.193) 0:00:00.193 *******
2026-02-15 03:32:59.826753 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:32:59.826761 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:32:59.826769 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:32:59.826776 | orchestrator |
2026-02-15 03:32:59.826784 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-15 03:32:59.826791 | orchestrator | Sunday 15 February 2026 03:32:50 +0000 (0:00:00.359) 0:00:00.552 *******
2026-02-15 03:32:59.826798 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-02-15 03:32:59.826806 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-02-15 03:32:59.826814 | orchestrator | ok:
[testbed-node-2] => (item=enable_rabbitmq_True)
2026-02-15 03:32:59.826862 | orchestrator |
2026-02-15 03:32:59.826871 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-02-15 03:32:59.826878 | orchestrator |
2026-02-15 03:32:59.826886 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-02-15 03:32:59.826893 | orchestrator | Sunday 15 February 2026 03:32:51 +0000 (0:00:00.701) 0:00:01.254 *******
2026-02-15 03:32:59.826901 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:32:59.826909 | orchestrator |
2026-02-15 03:32:59.826917 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-02-15 03:32:59.826924 | orchestrator | Sunday 15 February 2026 03:32:51 +0000 (0:00:00.568) 0:00:01.823 *******
2026-02-15 03:32:59.826931 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:32:59.826938 | orchestrator |
2026-02-15 03:32:59.826945 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-02-15 03:32:59.826953 | orchestrator | Sunday 15 February 2026 03:32:52 +0000 (0:00:00.984) 0:00:02.807 *******
2026-02-15 03:32:59.826960 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:32:59.826968 | orchestrator |
2026-02-15 03:32:59.826975 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-02-15 03:32:59.826982 | orchestrator | Sunday 15 February 2026 03:32:53 +0000 (0:00:00.437) 0:00:03.244 *******
2026-02-15 03:32:59.826990 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:32:59.826997 | orchestrator |
2026-02-15 03:32:59.827004 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-02-15 03:32:59.827025 | orchestrator | Sunday 15 February 2026 03:32:53 +0000 (0:00:00.379) 0:00:03.651 *******
2026-02-15 03:32:59.827033 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:32:59.827040 | orchestrator | 2026-02-15 03:32:59.827047 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-15 03:32:59.827055 | orchestrator | Sunday 15 February 2026 03:32:54 +0000 (0:00:00.379) 0:00:04.031 ******* 2026-02-15 03:32:59.827082 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:32:59.827090 | orchestrator | 2026-02-15 03:32:59.827097 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-15 03:32:59.827104 | orchestrator | Sunday 15 February 2026 03:32:54 +0000 (0:00:00.620) 0:00:04.651 ******* 2026-02-15 03:32:59.827112 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:32:59.827119 | orchestrator | 2026-02-15 03:32:59.827126 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-15 03:32:59.827134 | orchestrator | Sunday 15 February 2026 03:32:55 +0000 (0:00:00.896) 0:00:05.548 ******* 2026-02-15 03:32:59.827141 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:32:59.827148 | orchestrator | 2026-02-15 03:32:59.827155 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-15 03:32:59.827162 | orchestrator | Sunday 15 February 2026 03:32:56 +0000 (0:00:00.823) 0:00:06.372 ******* 2026-02-15 03:32:59.827170 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:32:59.827177 | orchestrator | 2026-02-15 03:32:59.827184 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-02-15 03:32:59.827191 | orchestrator | Sunday 15 February 2026 03:32:56 +0000 (0:00:00.393) 0:00:06.765 ******* 2026-02-15 03:32:59.827199 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:32:59.827206 | orchestrator | 2026-02-15 
03:32:59.827213 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-15 03:32:59.827220 | orchestrator | Sunday 15 February 2026 03:32:57 +0000 (0:00:00.406) 0:00:07.172 ******* 2026-02-15 03:32:59.827254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:32:59.827277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': 
['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:32:59.827299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:32:59.827321 | orchestrator | 2026-02-15 03:32:59.827334 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-02-15 03:32:59.827345 | orchestrator | Sunday 15 February 2026 03:32:58 +0000 (0:00:00.836) 0:00:08.009 ******* 2026-02-15 03:32:59.827358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:32:59.827383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 
'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:33:18.765086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:33:18.765217 | orchestrator | 2026-02-15 03:33:18.765235 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-15 03:33:18.765247 | orchestrator | Sunday 15 February 2026 03:32:59 +0000 (0:00:01.801) 0:00:09.811 ******* 2026-02-15 03:33:18.765257 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-15 03:33:18.765282 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-15 03:33:18.765293 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-15 03:33:18.765303 | orchestrator | 2026-02-15 03:33:18.765313 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] 
*********************************** 2026-02-15 03:33:18.765322 | orchestrator | Sunday 15 February 2026 03:33:01 +0000 (0:00:01.508) 0:00:11.319 ******* 2026-02-15 03:33:18.765332 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-15 03:33:18.765342 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-15 03:33:18.765352 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-15 03:33:18.765361 | orchestrator | 2026-02-15 03:33:18.765371 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-15 03:33:18.765381 | orchestrator | Sunday 15 February 2026 03:33:03 +0000 (0:00:01.704) 0:00:13.024 ******* 2026-02-15 03:33:18.765390 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-15 03:33:18.765400 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-15 03:33:18.765410 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-15 03:33:18.765419 | orchestrator | 2026-02-15 03:33:18.765429 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-15 03:33:18.765439 | orchestrator | Sunday 15 February 2026 03:33:04 +0000 (0:00:01.351) 0:00:14.376 ******* 2026-02-15 03:33:18.765448 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-15 03:33:18.765458 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-15 03:33:18.765468 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-15 03:33:18.765478 | orchestrator | 2026-02-15 03:33:18.765487 | orchestrator | TASK 
[rabbitmq : Copying over definitions.json] ******************************** 2026-02-15 03:33:18.765497 | orchestrator | Sunday 15 February 2026 03:33:06 +0000 (0:00:01.777) 0:00:16.154 ******* 2026-02-15 03:33:18.765507 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-15 03:33:18.765517 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-15 03:33:18.765527 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-15 03:33:18.765537 | orchestrator | 2026-02-15 03:33:18.765547 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-15 03:33:18.765556 | orchestrator | Sunday 15 February 2026 03:33:07 +0000 (0:00:01.440) 0:00:17.595 ******* 2026-02-15 03:33:18.765566 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-15 03:33:18.765576 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-15 03:33:18.765586 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-15 03:33:18.765603 | orchestrator | 2026-02-15 03:33:18.765614 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-15 03:33:18.765623 | orchestrator | Sunday 15 February 2026 03:33:08 +0000 (0:00:01.368) 0:00:18.964 ******* 2026-02-15 03:33:18.765633 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:33:18.765644 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:33:18.765669 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:33:18.765680 | orchestrator | 2026-02-15 03:33:18.765690 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-02-15 03:33:18.765699 | orchestrator | Sunday 
15 February 2026 03:33:09 +0000 (0:00:00.505) 0:00:19.469 ******* 2026-02-15 03:33:18.765710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:33:18.765727 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:33:18.765739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 03:33:18.765756 | orchestrator | 2026-02-15 03:33:18.765766 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-02-15 03:33:18.765776 | orchestrator | Sunday 15 February 2026 03:33:10 +0000 (0:00:01.238) 0:00:20.707 ******* 2026-02-15 03:33:18.765786 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:33:18.765825 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:33:18.765835 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:33:18.765845 | orchestrator | 2026-02-15 03:33:18.765855 | orchestrator | TASK 
[rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-02-15 03:33:18.765865 | orchestrator | Sunday 15 February 2026 03:33:11 +0000 (0:00:00.850) 0:00:21.558 *******
2026-02-15 03:33:18.765874 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:33:18.765884 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:33:18.765894 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:33:18.765903 | orchestrator |
2026-02-15 03:33:18.765913 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-02-15 03:33:18.765929 | orchestrator | Sunday 15 February 2026 03:33:18 +0000 (0:00:07.188) 0:00:28.746 *******
2026-02-15 03:34:49.926664 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:34:49.926856 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:34:49.926871 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:34:49.926884 | orchestrator |
2026-02-15 03:34:49.926897 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-15 03:34:49.926909 | orchestrator |
2026-02-15 03:34:49.926920 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-15 03:34:49.926932 | orchestrator | Sunday 15 February 2026 03:33:19 +0000 (0:00:00.534) 0:00:29.281 *******
2026-02-15 03:34:49.926943 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:34:49.926955 | orchestrator |
2026-02-15 03:34:49.926966 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-15 03:34:49.926977 | orchestrator | Sunday 15 February 2026 03:33:19 +0000 (0:00:00.621) 0:00:29.903 *******
2026-02-15 03:34:49.926988 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:34:49.926999 | orchestrator |
2026-02-15 03:34:49.927023 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-15 03:34:49.927043 | orchestrator | Sunday 15 February 2026 03:33:20 +0000 (0:00:00.248) 0:00:30.151 *******
2026-02-15 03:34:49.927055 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:34:49.927066 | orchestrator |
2026-02-15 03:34:49.927078 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-15 03:34:49.927090 | orchestrator | Sunday 15 February 2026 03:33:21 +0000 (0:00:01.602) 0:00:31.753 *******
2026-02-15 03:34:49.927100 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:34:49.927112 | orchestrator |
2026-02-15 03:34:49.927123 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-15 03:34:49.927134 | orchestrator |
2026-02-15 03:34:49.927145 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-15 03:34:49.927156 | orchestrator | Sunday 15 February 2026 03:34:16 +0000 (0:00:54.499) 0:01:26.252 *******
2026-02-15 03:34:49.927167 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:34:49.927178 | orchestrator |
2026-02-15 03:34:49.927206 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-15 03:34:49.927220 | orchestrator | Sunday 15 February 2026 03:34:16 +0000 (0:00:00.643) 0:01:26.896 *******
2026-02-15 03:34:49.927232 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:34:49.927245 | orchestrator |
2026-02-15 03:34:49.927257 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-15 03:34:49.927269 | orchestrator | Sunday 15 February 2026 03:34:17 +0000 (0:00:00.232) 0:01:27.129 *******
2026-02-15 03:34:49.927281 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:34:49.927294 | orchestrator |
2026-02-15 03:34:49.927307 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-15 03:34:49.927320 | orchestrator | Sunday 15 February 2026 03:34:18 +0000 (0:00:01.526) 0:01:28.656 *******
2026-02-15 03:34:49.927333 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:34:49.927367 | orchestrator |
2026-02-15 03:34:49.927379 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-02-15 03:34:49.927390 | orchestrator |
2026-02-15 03:34:49.927401 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-02-15 03:34:49.927412 | orchestrator | Sunday 15 February 2026 03:34:31 +0000 (0:00:12.406) 0:01:41.062 *******
2026-02-15 03:34:49.927423 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:34:49.927434 | orchestrator |
2026-02-15 03:34:49.927445 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-02-15 03:34:49.927456 | orchestrator | Sunday 15 February 2026 03:34:31 +0000 (0:00:00.799) 0:01:41.862 *******
2026-02-15 03:34:49.927467 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:34:49.927478 | orchestrator |
2026-02-15 03:34:49.927489 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-02-15 03:34:49.927500 | orchestrator | Sunday 15 February 2026 03:34:32 +0000 (0:00:00.247) 0:01:42.110 *******
2026-02-15 03:34:49.927511 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:34:49.927522 | orchestrator |
2026-02-15 03:34:49.927533 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-02-15 03:34:49.927544 | orchestrator | Sunday 15 February 2026 03:34:33 +0000 (0:00:01.537) 0:01:43.647 *******
2026-02-15 03:34:49.927555 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:34:49.927566 | orchestrator |
2026-02-15 03:34:49.927577 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-02-15 03:34:49.927587 | orchestrator |
2026-02-15 03:34:49.927598 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-02-15 03:34:49.927609 | orchestrator | Sunday 15 February 2026 03:34:47 +0000 (0:00:13.464) 0:01:57.112 *******
2026-02-15 03:34:49.927620 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:34:49.927631 | orchestrator |
2026-02-15 03:34:49.927642 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-02-15 03:34:49.927653 | orchestrator | Sunday 15 February 2026 03:34:47 +0000 (0:00:00.504) 0:01:57.616 *******
2026-02-15 03:34:49.927663 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-15 03:34:49.927674 | orchestrator | enable_outward_rabbitmq_True
2026-02-15 03:34:49.927709 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-02-15 03:34:49.927721 | orchestrator | outward_rabbitmq_restart
2026-02-15 03:34:49.927732 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:34:49.927742 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:34:49.927753 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:34:49.927764 | orchestrator |
2026-02-15 03:34:49.927775 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-02-15 03:34:49.927786 | orchestrator | skipping: no hosts matched
2026-02-15 03:34:49.927797 | orchestrator |
2026-02-15 03:34:49.927808 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-02-15 03:34:49.927819 | orchestrator | skipping: no hosts matched
2026-02-15 03:34:49.927830 | orchestrator |
2026-02-15 03:34:49.927841 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-02-15 03:34:49.927852 | orchestrator | skipping: no hosts matched
2026-02-15 03:34:49.927862 | orchestrator |
2026-02-15 03:34:49.927873 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:34:49.927902 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-15 03:34:49.927916 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:34:49.927927 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:34:49.927938 | orchestrator |
2026-02-15 03:34:49.927949 | orchestrator |
2026-02-15 03:34:49.927969 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:34:49.927980 | orchestrator | Sunday 15 February 2026 03:34:49 +0000 (0:00:01.942) 0:01:59.559 *******
2026-02-15 03:34:49.927991 | orchestrator | ===============================================================================
2026-02-15 03:34:49.928002 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.37s
2026-02-15 03:34:49.928013 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.19s
2026-02-15 03:34:49.928028 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.67s
2026-02-15 03:34:49.928048 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.07s
2026-02-15 03:34:49.928067 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 1.94s
2026-02-15 03:34:49.928084 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.80s
2026-02-15 03:34:49.928102 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.78s
2026-02-15 03:34:49.928120 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 1.71s
2026-02-15 03:34:49.928146 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.51s
2026-02-15 03:34:49.928163 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.44s
2026-02-15 03:34:49.928180 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.37s
2026-02-15 03:34:49.928196 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.35s
2026-02-15 03:34:49.928211 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.24s
2026-02-15 03:34:49.928227 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.98s
2026-02-15 03:34:49.928242 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 0.90s
2026-02-15 03:34:49.928259 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.85s
2026-02-15 03:34:49.928278 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 0.84s
2026-02-15 03:34:49.928296 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.82s
2026-02-15 03:34:49.928314 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 0.73s
2026-02-15 03:34:49.928332 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.70s
2026-02-15 03:34:52.480015 | orchestrator | 2026-02-15 03:34:52 | INFO  | Task 4575a670-6372-4616-a745-0f37a719a76b (openvswitch) was prepared for execution.
2026-02-15 03:34:52.480344 | orchestrator | 2026-02-15 03:34:52 | INFO  | It takes a moment until task 4575a670-6372-4616-a745-0f37a719a76b (openvswitch) has been started and output is visible here.
2026-02-15 03:35:05.948592 | orchestrator |
2026-02-15 03:35:05.948777 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-15 03:35:05.948797 | orchestrator |
2026-02-15 03:35:05.948809 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-15 03:35:05.948821 | orchestrator | Sunday 15 February 2026 03:34:57 +0000 (0:00:00.297) 0:00:00.297 *******
2026-02-15 03:35:05.948833 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:35:05.948844 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:35:05.948856 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:35:05.948867 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:35:05.948878 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:35:05.948889 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:35:05.948900 | orchestrator |
2026-02-15 03:35:05.948911 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-15 03:35:05.948922 | orchestrator | Sunday 15 February 2026 03:34:57 +0000 (0:00:00.737) 0:00:01.034 *******
2026-02-15 03:35:05.948933 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-15 03:35:05.948945 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-15 03:35:05.948983 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-15 03:35:05.948995 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-15 03:35:05.949006 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-15 03:35:05.949018 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2026-02-15 03:35:05.949030 | orchestrator |
2026-02-15 03:35:05.949041 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2026-02-15 03:35:05.949053 | orchestrator |
2026-02-15 03:35:05.949064 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2026-02-15 03:35:05.949075 | orchestrator | Sunday 15 February 2026 03:34:58 +0000 (0:00:00.628) 0:00:01.663 *******
2026-02-15 03:35:05.949087 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:35:05.949099 | orchestrator |
2026-02-15 03:35:05.949110 | orchestrator | TASK [module-load : Load modules] **********************************************
2026-02-15 03:35:05.949122 | orchestrator | Sunday 15 February 2026 03:34:59 +0000 (0:00:01.218) 0:00:02.881 *******
2026-02-15 03:35:05.949135 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-15 03:35:05.949148 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-15 03:35:05.949161 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-15 03:35:05.949174 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-15 03:35:05.949187 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-15 03:35:05.949200 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-15 03:35:05.949213 | orchestrator |
2026-02-15 03:35:05.949227 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2026-02-15 03:35:05.949240 | orchestrator | Sunday 15 February 2026 03:35:00 +0000 (0:00:01.257) 0:00:04.138 *******
2026-02-15 03:35:05.949253 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2026-02-15 03:35:05.949265 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2026-02-15 03:35:05.949278 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2026-02-15 03:35:05.949290 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2026-02-15 03:35:05.949304 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2026-02-15 03:35:05.949316 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2026-02-15 03:35:05.949330 | orchestrator |
2026-02-15 03:35:05.949343 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2026-02-15 03:35:05.949355 | orchestrator | Sunday 15 February 2026 03:35:02 +0000 (0:00:01.667) 0:00:05.806 *******
2026-02-15 03:35:05.949368 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2026-02-15 03:35:05.949381 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:35:05.949396 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2026-02-15 03:35:05.949409 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:35:05.949422 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2026-02-15 03:35:05.949435 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:35:05.949448 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)
2026-02-15 03:35:05.949460 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:35:05.949473 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)
2026-02-15 03:35:05.949486 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:35:05.949500 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)
2026-02-15 03:35:05.949511 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:35:05.949522 | orchestrator |
2026-02-15 03:35:05.949534 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] *****************
2026-02-15 03:35:05.949545 | orchestrator | Sunday 15 February 2026 03:35:03 +0000 (0:00:01.243) 0:00:07.049 *******
2026-02-15 03:35:05.949564 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:35:05.949575 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:35:05.949586 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:35:05.949597 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:35:05.949608 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:35:05.949619 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:35:05.949630 | orchestrator | 2026-02-15 03:35:05.949641 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-15 03:35:05.949653 | orchestrator | Sunday 15 February 2026 03:35:04 +0000 (0:00:00.797) 0:00:07.846 ******* 2026-02-15 03:35:05.949746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:05.949764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client 
list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:05.949777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:05.949830 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:05.949849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:05.949878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550450 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550463 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550528 | orchestrator | 2026-02-15 03:35:08.550541 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-15 03:35:08.550554 | orchestrator | Sunday 15 February 2026 03:35:06 +0000 (0:00:01.451) 0:00:09.298 ******* 2026-02-15 03:35:08.550565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550605 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550624 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:08.550646 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:11.386005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:11.386112 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:11.386119 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:11.386160 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:11.386168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:11.386187 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:11.386194 | orchestrator | 2026-02-15 03:35:11.386202 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-02-15 03:35:11.386209 | orchestrator | Sunday 15 February 2026 03:35:08 +0000 (0:00:02.607) 0:00:11.905 ******* 2026-02-15 03:35:11.386215 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:35:11.386223 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:35:11.386229 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:35:11.386236 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:35:11.386243 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:35:11.386249 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:35:11.386255 | orchestrator | 2026-02-15 03:35:11.386262 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-02-15 03:35:11.386268 | orchestrator | Sunday 15 February 2026 03:35:09 +0000 (0:00:01.027) 0:00:12.932 ******* 2026-02-15 03:35:11.386274 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:11.386289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:11.386300 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:11.386308 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:11.386322 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:37.237026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 03:35:37.237154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:37.237204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 
03:35:37.237214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:37.237223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 03:35:37.237247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-15 03:35:37.237256 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-15 03:35:37.237272 | orchestrator |
2026-02-15 03:35:37.237282 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-15 03:35:37.237292 | orchestrator | Sunday 15 February 2026 03:35:11 +0000 (0:00:01.797) 0:00:14.730 *******
2026-02-15 03:35:37.237301 | orchestrator |
2026-02-15 03:35:37.237309 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-15 03:35:37.237317 | orchestrator | Sunday 15 February 2026 03:35:11 +0000 (0:00:00.342) 0:00:15.073 *******
2026-02-15 03:35:37.237325 | orchestrator |
2026-02-15 03:35:37.237333 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-15 03:35:37.237341 | orchestrator | Sunday 15 February 2026 03:35:11 +0000 (0:00:00.148) 0:00:15.221 *******
2026-02-15 03:35:37.237349 | orchestrator |
2026-02-15 03:35:37.237357 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-15 03:35:37.237365 | orchestrator | Sunday 15 February 2026 03:35:12 +0000 (0:00:00.167) 0:00:15.389 *******
2026-02-15 03:35:37.237373 | orchestrator |
2026-02-15 03:35:37.237381 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-15 03:35:37.237392 | orchestrator | Sunday 15 February 2026 03:35:12 +0000 (0:00:00.148) 0:00:15.537 *******
2026-02-15 03:35:37.237405 | orchestrator |
2026-02-15 03:35:37.237418 | orchestrator | TASK [openvswitch : Flush Handlers] ********************************************
2026-02-15 03:35:37.237431 | orchestrator | Sunday 15 February 2026 03:35:12 +0000 (0:00:00.134) 0:00:15.672 *******
2026-02-15 03:35:37.237443 | orchestrator |
2026-02-15 03:35:37.237455 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ********
2026-02-15 03:35:37.237475 | orchestrator | Sunday 15 February 2026 03:35:12 +0000 (0:00:00.136) 0:00:15.808 *******
2026-02-15 03:35:37.237490 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:35:37.237505 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:35:37.237518 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:35:37.237532 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:35:37.237542 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:35:37.237552 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:35:37.237566 | orchestrator |
2026-02-15 03:35:37.237580 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] ***
2026-02-15 03:35:37.237595 | orchestrator | Sunday 15 February 2026 03:35:21 +0000 (0:00:08.812) 0:00:24.621 *******
2026-02-15 03:35:37.237608 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:35:37.237623 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:35:37.237663 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:35:37.237678 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:35:37.237691 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:35:37.237708 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:35:37.237722 | orchestrator |
2026-02-15 03:35:37.237732 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-15 03:35:37.237741 | orchestrator | Sunday 15 February 2026 03:35:22 +0000 (0:00:01.135) 0:00:25.757 *******
2026-02-15 03:35:37.237755 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:35:37.237768 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:35:37.237781 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:35:37.237794 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:35:37.237808 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:35:37.237823 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:35:37.237836 | orchestrator |
2026-02-15 03:35:37.237850 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ********************
2026-02-15 03:35:37.237863 | orchestrator | Sunday 15 February 2026 03:35:30 +0000 (0:00:08.020) 0:00:33.778 *******
2026-02-15 03:35:37.237877 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'})
2026-02-15 03:35:37.237892 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'})
2026-02-15 03:35:37.237901 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'})
2026-02-15 03:35:37.237918 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'})
2026-02-15 03:35:37.237926 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'})
2026-02-15 03:35:37.237934 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'})
2026-02-15 03:35:37.237944 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'})
2026-02-15 03:35:37.237968 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'})
2026-02-15 03:35:50.326987 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'})
2026-02-15 03:35:50.327117 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'})
2026-02-15 03:35:50.327139 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'})
2026-02-15 03:35:50.327154 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'})
2026-02-15 03:35:50.327169 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-15 03:35:50.327183 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-15 03:35:50.327197 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-15 03:35:50.327211 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-15 03:35:50.327226 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-15 03:35:50.327241 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'})
2026-02-15 03:35:50.327256 | orchestrator |
2026-02-15 03:35:50.327272 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] *********************
2026-02-15 03:35:50.327286 | orchestrator | Sunday 15 February 2026 03:35:37 +0000 (0:00:06.708) 0:00:40.486 *******
2026-02-15 03:35:50.327300 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)
2026-02-15 03:35:50.327314 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:35:50.327329 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)
2026-02-15 03:35:50.327343 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:35:50.327358 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)
2026-02-15 03:35:50.327372 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:35:50.327385 | orchestrator | changed: [testbed-node-0] => (item=br-ex)
2026-02-15 03:35:50.327400 | orchestrator | changed: [testbed-node-1] => (item=br-ex)
2026-02-15 03:35:50.327413 | orchestrator | changed: [testbed-node-2] => (item=br-ex)
2026-02-15 03:35:50.327427 | orchestrator |
2026-02-15 03:35:50.327440 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] *********************
2026-02-15 03:35:50.327454 | orchestrator | Sunday 15 February 2026 03:35:39 +0000 (0:00:02.388) 0:00:42.875 *******
2026-02-15 03:35:50.327487 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])
2026-02-15 03:35:50.327503 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:35:50.327516 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])
2026-02-15 03:35:50.327530 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:35:50.327544 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])
2026-02-15 03:35:50.327558 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:35:50.327572 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0'])
2026-02-15 03:35:50.327586 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0'])
2026-02-15 03:35:50.327661 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0'])
2026-02-15 03:35:50.327679 | orchestrator |
2026-02-15 03:35:50.327692 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] *********
2026-02-15 03:35:50.327705 | orchestrator | Sunday 15 February 2026 03:35:42 +0000 (0:00:03.103) 0:00:45.979 *******
2026-02-15 03:35:50.327718 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:35:50.327731 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:35:50.327745 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:35:50.327759 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:35:50.327773 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:35:50.327787 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:35:50.327800 | orchestrator |
2026-02-15 03:35:50.327814 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:35:50.327830 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-15 03:35:50.327846 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-15 03:35:50.327859 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-02-15 03:35:50.327872 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-15 03:35:50.327885 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-15 03:35:50.327899 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-02-15 03:35:50.327912 | orchestrator |
2026-02-15 03:35:50.327926 | orchestrator |
2026-02-15 03:35:50.327940 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:35:50.327953 | orchestrator | Sunday 15 February 2026 03:35:49 +0000 (0:00:07.151) 0:00:53.130 *******
2026-02-15 03:35:50.327993 | orchestrator | ===============================================================================
2026-02-15 03:35:50.328009 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 15.17s
2026-02-15 03:35:50.328024 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 8.81s
2026-02-15 03:35:50.328036 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.71s
2026-02-15 03:35:50.328047 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.10s
2026-02-15 03:35:50.328059 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.61s
2026-02-15 03:35:50.328071 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.39s
2026-02-15 03:35:50.328083 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 1.80s
2026-02-15 03:35:50.328096 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.67s
2026-02-15 03:35:50.328107 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.45s
2026-02-15 03:35:50.328119 | orchestrator | module-load : Load modules ---------------------------------------------- 1.26s
2026-02-15 03:35:50.328131 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.24s
2026-02-15 03:35:50.328142 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.22s
2026-02-15 03:35:50.328154 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.14s
2026-02-15 03:35:50.328166 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.08s
2026-02-15 03:35:50.328177 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.03s
2026-02-15 03:35:50.328203 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.80s
2026-02-15 03:35:50.328215 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s
2026-02-15 03:35:50.328227 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2026-02-15 03:35:52.919964 | orchestrator | 2026-02-15 03:35:52 | INFO  | Task 6e51fb6e-3b56-45a2-bdfb-276d246f1486 (ovn) was prepared for execution.
2026-02-15 03:35:52.920048 | orchestrator | 2026-02-15 03:35:52 | INFO  | It takes a moment until task 6e51fb6e-3b56-45a2-bdfb-276d246f1486 (ovn) has been started and output is visible here.
2026-02-15 03:36:04.484024 | orchestrator |
2026-02-15 03:36:04.484152 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-15 03:36:04.484169 | orchestrator |
2026-02-15 03:36:04.484199 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-15 03:36:04.484211 | orchestrator | Sunday 15 February 2026 03:35:57 +0000 (0:00:00.202) 0:00:00.202 *******
2026-02-15 03:36:04.484223 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:36:04.484284 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:36:04.484297 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:36:04.484310 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:36:04.484321 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:36:04.484332 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:36:04.484344 | orchestrator |
2026-02-15 03:36:04.484355 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-15 03:36:04.484367 | orchestrator | Sunday 15 February 2026 03:35:58 +0000 (0:00:00.746) 0:00:00.948 *******
2026-02-15 03:36:04.484378 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-02-15 03:36:04.484389 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-02-15
03:36:04.484401 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-15 03:36:04.484412 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-15 03:36:04.484424 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-15 03:36:04.484435 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-15 03:36:04.484446 | orchestrator | 2026-02-15 03:36:04.484457 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-15 03:36:04.484468 | orchestrator | 2026-02-15 03:36:04.484479 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-15 03:36:04.484491 | orchestrator | Sunday 15 February 2026 03:35:59 +0000 (0:00:00.908) 0:00:01.857 ******* 2026-02-15 03:36:04.484502 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:36:04.484515 | orchestrator | 2026-02-15 03:36:04.484526 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-15 03:36:04.484537 | orchestrator | Sunday 15 February 2026 03:36:00 +0000 (0:00:01.187) 0:00:03.044 ******* 2026-02-15 03:36:04.484551 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484568 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484705 | orchestrator | 2026-02-15 03:36:04.484724 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-15 03:36:04.484738 | orchestrator | Sunday 15 February 2026 03:36:01 +0000 (0:00:01.328) 0:00:04.373 ******* 2026-02-15 03:36:04.484751 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484765 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484778 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484791 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484839 | orchestrator | 2026-02-15 03:36:04.484852 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-15 03:36:04.484866 | orchestrator | Sunday 15 February 2026 03:36:03 +0000 (0:00:01.619) 0:00:05.992 ******* 2026-02-15 03:36:04.484879 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484893 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:04.484919 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461710 | orchestrator | 2026-02-15 03:36:29.461718 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-02-15 03:36:29.461725 | orchestrator | Sunday 15 February 2026 03:36:04 +0000 (0:00:01.211) 0:00:07.203 ******* 2026-02-15 03:36:29.461749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461756 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461769 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461812 | orchestrator | 2026-02-15 03:36:29.461818 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-02-15 03:36:29.461825 | orchestrator | Sunday 15 February 2026 03:36:05 +0000 (0:00:01.507) 0:00:08.710 ******* 
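The three tasks above (config directories, `config.json`, systemd override directory) all loop over the same per-service dict shown in the item output. A minimal sketch of the host-side paths involved, assuming the kolla conventions visible in the log (the exact systemd override path is an assumption, not taken from this log):

```shell
# Sketch only: derive the host paths kolla-ansible manages for one service.
# 'service' and 'container' match the item dict in the log; the systemd
# override directory name is an assumed kolla convention.
service="ovn-controller"
container="ovn_controller"

echo "/etc/kolla/${service}"                                        # config directory
echo "/etc/kolla/${service}/config.json"                            # kolla_start config
echo "/etc/systemd/system/kolla-${container}-container.service.d"   # systemd override dir
```

The `config.json` file tells the container's `kolla_start` wrapper which files to copy from the read-only bind mount (`/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro` in the volume list above) into place at startup.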
2026-02-15 03:36:29.461831 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461843 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461868 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:36:29.461888 | orchestrator | 2026-02-15 03:36:29.461898 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-02-15 03:36:29.461918 | orchestrator | Sunday 15 February 2026 03:36:07 +0000 (0:00:01.406) 0:00:10.117 ******* 2026-02-15 03:36:29.461937 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:36:29.461947 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:36:29.461957 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:36:29.461967 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:36:29.461977 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:36:29.461986 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:36:29.461995 | orchestrator | 2026-02-15 03:36:29.462005 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-02-15 03:36:29.462069 | orchestrator | Sunday 15 February 2026 03:36:09 +0000 (0:00:02.594) 0:00:12.712 ******* 2026-02-15 03:36:29.462084 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 
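The `Configure OVN in OVSDB` task below writes these settings into the `external_ids` column of the local Open vSwitch database, which is how ovn-controller learns its encapsulation IP, tunnel type, and southbound DB endpoints. A rough hand-run equivalent, as a sketch (the real task uses an Ansible OVSDB module; the values here are copied from testbed-node-3 in the log):

```shell
# Sketch: the external_ids settings applied per node, rendered as ovs-vsctl
# commands. ENCAP_IP is testbed-node-3's value from the log; other nodes differ.
ENCAP_IP="192.168.16.13"
OVN_REMOTE="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"

for setting in \
    "ovn-encap-ip=${ENCAP_IP}" \
    "ovn-encap-type=geneve" \
    "ovn-remote=${OVN_REMOTE}" \
    "ovn-remote-probe-interval=60000" \
    "ovn-openflow-probe-interval=60" \
    "ovn-monitor-all=false"
do
    echo "ovs-vsctl set open_vswitch . external_ids:${setting}"
done
```

Note in the log that the gateway-related keys (`ovn-bridge-mappings`, `ovn-cms-options` with `enable-chassis-as-gw`) are set `present` only on the control nodes (testbed-node-0/1/2) and `absent` on the compute nodes, while `ovn-chassis-mac-mappings` is the inverse.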
2026-02-15 03:36:29.462097 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-02-15 03:36:29.462106 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-02-15 03:36:29.462115 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-02-15 03:36:29.462124 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-02-15 03:36:29.462141 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-02-15 03:36:29.462159 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-15 03:37:02.641556 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-15 03:37:02.641745 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-15 03:37:02.641759 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-15 03:37:02.641770 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-15 03:37:02.641803 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-02-15 03:37:02.641814 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-15 03:37:02.641826 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-15 03:37:02.641836 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-15 03:37:02.641846 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-15 03:37:02.641856 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-15 03:37:02.641866 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-02-15 03:37:02.641876 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-15 03:37:02.641888 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-15 03:37:02.641898 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-15 03:37:02.641908 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-15 03:37:02.641918 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-15 03:37:02.641928 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-02-15 03:37:02.641937 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-15 03:37:02.641947 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-15 03:37:02.641957 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-15 03:37:02.641966 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-15 03:37:02.641976 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 
'value': '60'}) 2026-02-15 03:37:02.641986 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-02-15 03:37:02.641996 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-15 03:37:02.642006 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-15 03:37:02.642059 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-15 03:37:02.642071 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-15 03:37:02.642082 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-15 03:37:02.642092 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-02-15 03:37:02.642104 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-15 03:37:02.642117 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-15 03:37:02.642128 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-15 03:37:02.642139 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-15 03:37:02.642159 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-02-15 03:37:02.642181 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-02-15 03:37:02.642207 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 
'present'}) 2026-02-15 03:37:02.642235 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-02-15 03:37:02.642246 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-02-15 03:37:02.642256 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-02-15 03:37:02.642266 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-02-15 03:37:02.642276 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-02-15 03:37:02.642293 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-15 03:37:02.642313 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-15 03:37:02.642339 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-15 03:37:02.642353 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-15 03:37:02.642369 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-02-15 03:37:02.642385 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-02-15 03:37:02.642400 | orchestrator | 2026-02-15 03:37:02.642417 | orchestrator | TASK [ovn-controller : Flush handlers] 
***************************************** 2026-02-15 03:37:02.642432 | orchestrator | Sunday 15 February 2026 03:36:28 +0000 (0:00:18.880) 0:00:31.592 ******* 2026-02-15 03:37:02.642446 | orchestrator | 2026-02-15 03:37:02.642460 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-15 03:37:02.642474 | orchestrator | Sunday 15 February 2026 03:36:29 +0000 (0:00:00.249) 0:00:31.842 ******* 2026-02-15 03:37:02.642488 | orchestrator | 2026-02-15 03:37:02.642502 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-15 03:37:02.642517 | orchestrator | Sunday 15 February 2026 03:36:29 +0000 (0:00:00.065) 0:00:31.908 ******* 2026-02-15 03:37:02.642531 | orchestrator | 2026-02-15 03:37:02.642545 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-15 03:37:02.642560 | orchestrator | Sunday 15 February 2026 03:36:29 +0000 (0:00:00.065) 0:00:31.974 ******* 2026-02-15 03:37:02.642608 | orchestrator | 2026-02-15 03:37:02.642623 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-15 03:37:02.642638 | orchestrator | Sunday 15 February 2026 03:36:29 +0000 (0:00:00.065) 0:00:32.039 ******* 2026-02-15 03:37:02.642654 | orchestrator | 2026-02-15 03:37:02.642669 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-02-15 03:37:02.642689 | orchestrator | Sunday 15 February 2026 03:36:29 +0000 (0:00:00.067) 0:00:32.107 ******* 2026-02-15 03:37:02.642705 | orchestrator | 2026-02-15 03:37:02.642721 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-02-15 03:37:02.642735 | orchestrator | Sunday 15 February 2026 03:36:29 +0000 (0:00:00.068) 0:00:32.175 ******* 2026-02-15 03:37:02.642749 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:37:02.642778 | orchestrator | ok: 
[testbed-node-0] 2026-02-15 03:37:02.642795 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:37:02.642811 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:37:02.642826 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:37:02.642843 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:37:02.642859 | orchestrator | 2026-02-15 03:37:02.642875 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-02-15 03:37:02.642892 | orchestrator | Sunday 15 February 2026 03:36:31 +0000 (0:00:01.769) 0:00:33.945 ******* 2026-02-15 03:37:02.642909 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:37:02.642920 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:37:02.642930 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:37:02.642940 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:37:02.642949 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:37:02.642958 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:37:02.642968 | orchestrator | 2026-02-15 03:37:02.642978 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-02-15 03:37:02.642988 | orchestrator | 2026-02-15 03:37:02.642997 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-15 03:37:02.643007 | orchestrator | Sunday 15 February 2026 03:37:00 +0000 (0:00:29.095) 0:01:03.040 ******* 2026-02-15 03:37:02.643017 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:37:02.643026 | orchestrator | 2026-02-15 03:37:02.643036 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-02-15 03:37:02.643046 | orchestrator | Sunday 15 February 2026 03:37:01 +0000 (0:00:00.756) 0:01:03.797 ******* 2026-02-15 03:37:02.643056 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-15 03:37:02.643066 | orchestrator | 2026-02-15 03:37:02.643076 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-02-15 03:37:02.643085 | orchestrator | Sunday 15 February 2026 03:37:01 +0000 (0:00:00.589) 0:01:04.386 ******* 2026-02-15 03:37:02.643103 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:37:02.643113 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:37:02.643123 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:37:02.643133 | orchestrator | 2026-02-15 03:37:02.643143 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-02-15 03:37:02.643164 | orchestrator | Sunday 15 February 2026 03:37:02 +0000 (0:00:00.971) 0:01:05.358 ******* 2026-02-15 03:37:14.681535 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:37:14.681708 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:37:14.681725 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:37:14.681737 | orchestrator | 2026-02-15 03:37:14.681750 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-02-15 03:37:14.681763 | orchestrator | Sunday 15 February 2026 03:37:02 +0000 (0:00:00.371) 0:01:05.730 ******* 2026-02-15 03:37:14.681774 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:37:14.681785 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:37:14.681796 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:37:14.681809 | orchestrator | 2026-02-15 03:37:14.681821 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-02-15 03:37:14.681832 | orchestrator | Sunday 15 February 2026 03:37:03 +0000 (0:00:00.358) 0:01:06.088 ******* 2026-02-15 03:37:14.681843 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:37:14.681853 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:37:14.681861 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:37:14.681867 | orchestrator | 
2026-02-15 03:37:14.681875 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-15 03:37:14.681882 | orchestrator | Sunday 15 February 2026 03:37:03 +0000 (0:00:00.353) 0:01:06.441 *******
2026-02-15 03:37:14.681889 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:37:14.681895 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:37:14.681902 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:37:14.681908 | orchestrator |
2026-02-15 03:37:14.681937 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-15 03:37:14.681945 | orchestrator | Sunday 15 February 2026 03:37:04 +0000 (0:00:00.585) 0:01:07.027 *******
2026-02-15 03:37:14.681952 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.681960 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.681967 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.681974 | orchestrator |
2026-02-15 03:37:14.681981 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-15 03:37:14.681987 | orchestrator | Sunday 15 February 2026 03:37:04 +0000 (0:00:00.320) 0:01:07.348 *******
2026-02-15 03:37:14.681994 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682001 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682008 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682055 | orchestrator |
2026-02-15 03:37:14.682065 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-15 03:37:14.682074 | orchestrator | Sunday 15 February 2026 03:37:04 +0000 (0:00:00.335) 0:01:07.683 *******
2026-02-15 03:37:14.682082 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682090 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682097 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682105 | orchestrator |
2026-02-15 03:37:14.682113 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-15 03:37:14.682121 | orchestrator | Sunday 15 February 2026 03:37:05 +0000 (0:00:00.318) 0:01:08.002 *******
2026-02-15 03:37:14.682128 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682137 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682144 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682152 | orchestrator |
2026-02-15 03:37:14.682160 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-15 03:37:14.682168 | orchestrator | Sunday 15 February 2026 03:37:05 +0000 (0:00:00.364) 0:01:08.367 *******
2026-02-15 03:37:14.682175 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682182 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682189 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682196 | orchestrator |
2026-02-15 03:37:14.682203 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-15 03:37:14.682210 | orchestrator | Sunday 15 February 2026 03:37:06 +0000 (0:00:00.543) 0:01:08.910 *******
2026-02-15 03:37:14.682217 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682224 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682231 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682238 | orchestrator |
2026-02-15 03:37:14.682245 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-15 03:37:14.682252 | orchestrator | Sunday 15 February 2026 03:37:06 +0000 (0:00:00.325) 0:01:09.236 *******
2026-02-15 03:37:14.682260 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682266 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682273 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682280 | orchestrator |
2026-02-15 03:37:14.682287 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-15 03:37:14.682294 | orchestrator | Sunday 15 February 2026 03:37:06 +0000 (0:00:00.305) 0:01:09.541 *******
2026-02-15 03:37:14.682302 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682309 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682316 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682323 | orchestrator |
2026-02-15 03:37:14.682330 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-15 03:37:14.682338 | orchestrator | Sunday 15 February 2026 03:37:07 +0000 (0:00:00.312) 0:01:09.854 *******
2026-02-15 03:37:14.682344 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682351 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682358 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682365 | orchestrator |
2026-02-15 03:37:14.682372 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-15 03:37:14.682386 | orchestrator | Sunday 15 February 2026 03:37:07 +0000 (0:00:00.589) 0:01:10.443 *******
2026-02-15 03:37:14.682393 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682401 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682409 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682416 | orchestrator |
2026-02-15 03:37:14.682423 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-15 03:37:14.682429 | orchestrator | Sunday 15 February 2026 03:37:08 +0000 (0:00:00.325) 0:01:10.769 *******
2026-02-15 03:37:14.682446 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682453 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682459 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682466 | orchestrator |
2026-02-15 03:37:14.682472 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-15 03:37:14.682479 | orchestrator | Sunday 15 February 2026 03:37:08 +0000 (0:00:00.310) 0:01:11.079 *******
2026-02-15 03:37:14.682499 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682506 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682512 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682518 | orchestrator |
2026-02-15 03:37:14.682525 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-15 03:37:14.682531 | orchestrator | Sunday 15 February 2026 03:37:08 +0000 (0:00:00.353) 0:01:11.433 *******
2026-02-15 03:37:14.682538 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:37:14.682544 | orchestrator |
2026-02-15 03:37:14.682550 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-02-15 03:37:14.682573 | orchestrator | Sunday 15 February 2026 03:37:09 +0000 (0:00:00.879) 0:01:12.312 *******
2026-02-15 03:37:14.682581 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:37:14.682587 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:37:14.682593 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:37:14.682599 | orchestrator |
2026-02-15 03:37:14.682605 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-02-15 03:37:14.682612 | orchestrator | Sunday 15 February 2026 03:37:10 +0000 (0:00:00.460) 0:01:12.772 *******
2026-02-15 03:37:14.682618 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:37:14.682624 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:37:14.682631 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:37:14.682641 | orchestrator |
2026-02-15 03:37:14.682651 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-02-15 03:37:14.682661 | orchestrator | Sunday 15 February 2026 03:37:10 +0000 (0:00:00.506) 0:01:13.279 *******
2026-02-15 03:37:14.682671 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682682 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682692 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682702 | orchestrator |
2026-02-15 03:37:14.682713 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-02-15 03:37:14.682721 | orchestrator | Sunday 15 February 2026 03:37:10 +0000 (0:00:00.405) 0:01:13.685 *******
2026-02-15 03:37:14.682727 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682733 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682740 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682746 | orchestrator |
2026-02-15 03:37:14.682752 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-02-15 03:37:14.682759 | orchestrator | Sunday 15 February 2026 03:37:11 +0000 (0:00:00.587) 0:01:14.272 *******
2026-02-15 03:37:14.682765 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682771 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682777 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682783 | orchestrator |
2026-02-15 03:37:14.682789 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2026-02-15 03:37:14.682796 | orchestrator | Sunday 15 February 2026 03:37:11 +0000 (0:00:00.380) 0:01:14.652 *******
2026-02-15 03:37:14.682811 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682817 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682823 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682829 | orchestrator |
2026-02-15 03:37:14.682835 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2026-02-15 03:37:14.682842 | orchestrator | Sunday 15 February 2026 03:37:12 +0000 (0:00:00.365) 0:01:15.018 *******
2026-02-15 03:37:14.682848 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682854 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682860 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682866 | orchestrator |
2026-02-15 03:37:14.682872 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2026-02-15 03:37:14.682879 | orchestrator | Sunday 15 February 2026 03:37:12 +0000 (0:00:00.355) 0:01:15.373 *******
2026-02-15 03:37:14.682885 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:14.682891 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:14.682897 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:14.682903 | orchestrator |
2026-02-15 03:37:14.682909 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-15 03:37:14.682915 | orchestrator | Sunday 15 February 2026 03:37:13 +0000 (0:00:00.571) 0:01:15.945 *******
2026-02-15 03:37:14.682923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:14.682932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:14.682943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:14.682957 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025846 | orchestrator |
2026-02-15 03:37:21.025854 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-15 03:37:21.025862 | orchestrator | Sunday 15 February 2026 03:37:14 +0000 (0:00:01.454) 0:01:17.399 *******
2026-02-15 03:37:21.025871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025949 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025970 | orchestrator |
2026-02-15 03:37:21.025978 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-15 03:37:21.025985 | orchestrator | Sunday 15 February 2026 03:37:18 +0000 (0:00:03.902) 0:01:21.301 *******
2026-02-15 03:37:21.025992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.025999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.026006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.026054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.026065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:21.026079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.560495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.560695 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.560725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.560745 | orchestrator |
2026-02-15 03:37:35.560765 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-15 03:37:35.560784 | orchestrator | Sunday 15 February 2026 03:37:20 +0000 (0:00:02.005) 0:01:23.307 *******
2026-02-15 03:37:35.560800 | orchestrator |
2026-02-15 03:37:35.560817 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-15 03:37:35.560832 | orchestrator | Sunday 15 February 2026 03:37:20 +0000 (0:00:00.074) 0:01:23.381 *******
2026-02-15 03:37:35.560850 | orchestrator |
2026-02-15 03:37:35.560868 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-02-15 03:37:35.560885 | orchestrator | Sunday 15 February 2026 03:37:20 +0000 (0:00:00.286) 0:01:23.667 *******
2026-02-15 03:37:35.560902 | orchestrator |
2026-02-15 03:37:35.560920 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-02-15 03:37:35.560938 | orchestrator | Sunday 15 February 2026 03:37:21 +0000 (0:00:00.078) 0:01:23.745 *******
2026-02-15 03:37:35.560956 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:37:35.560974 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:37:35.560992 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:37:35.561009 | orchestrator |
2026-02-15 03:37:35.561022 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-02-15 03:37:35.561034 | orchestrator | Sunday 15 February 2026 03:37:23 +0000 (0:00:02.582) 0:01:26.328 *******
2026-02-15 03:37:35.561045 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:37:35.561056 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:37:35.561068 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:37:35.561079 | orchestrator |
2026-02-15 03:37:35.561090 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-02-15 03:37:35.561101 | orchestrator | Sunday 15 February 2026 03:37:26 +0000 (0:00:02.508) 0:01:28.836 *******
2026-02-15 03:37:35.561113 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:37:35.561124 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:37:35.561136 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:37:35.561145 | orchestrator |
2026-02-15 03:37:35.561155 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-15 03:37:35.561165 | orchestrator | Sunday 15 February 2026 03:37:28 +0000 (0:00:02.453) 0:01:31.290 *******
2026-02-15 03:37:35.561175 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:37:35.561184 | orchestrator |
2026-02-15 03:37:35.561194 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-15 03:37:35.561204 | orchestrator | Sunday 15 February 2026 03:37:28 +0000 (0:00:00.130) 0:01:31.420 *******
2026-02-15 03:37:35.561243 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:37:35.561254 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:37:35.561264 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:37:35.561273 | orchestrator |
2026-02-15 03:37:35.561283 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-15 03:37:35.561293 | orchestrator | Sunday 15 February 2026 03:37:29 +0000 (0:00:01.040) 0:01:32.461 *******
2026-02-15 03:37:35.561303 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:35.561327 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:35.561337 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:37:35.561347 | orchestrator |
2026-02-15 03:37:35.561357 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-15 03:37:35.561367 | orchestrator | Sunday 15 February 2026 03:37:30 +0000 (0:00:00.628) 0:01:33.090 *******
2026-02-15 03:37:35.561376 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:37:35.561386 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:37:35.561396 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:37:35.561406 | orchestrator |
2026-02-15 03:37:35.561416 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-15 03:37:35.561426 | orchestrator | Sunday 15 February 2026 03:37:31 +0000 (0:00:00.818) 0:01:33.908 *******
2026-02-15 03:37:35.561435 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:37:35.561445 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:37:35.561455 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:37:35.561465 | orchestrator |
2026-02-15 03:37:35.561474 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-15 03:37:35.561484 | orchestrator | Sunday 15 February 2026 03:37:31 +0000 (0:00:00.612) 0:01:34.520 *******
2026-02-15 03:37:35.561494 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:37:35.561504 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:37:35.561535 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:37:35.561545 | orchestrator |
2026-02-15 03:37:35.561633 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-15 03:37:35.561649 | orchestrator | Sunday 15 February 2026 03:37:33 +0000 (0:00:01.271) 0:01:35.792 *******
2026-02-15 03:37:35.561665 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:37:35.561681 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:37:35.561697 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:37:35.561714 | orchestrator |
2026-02-15 03:37:35.561731 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2026-02-15 03:37:35.561746 | orchestrator | Sunday 15 February 2026 03:37:33 +0000 (0:00:00.743) 0:01:36.535 *******
2026-02-15 03:37:35.561761 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:37:35.561777 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:37:35.561793 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:37:35.561809 | orchestrator |
2026-02-15 03:37:35.561825 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-15 03:37:35.561842 | orchestrator | Sunday 15 February 2026 03:37:34 +0000 (0:00:00.322) 0:01:36.857 *******
2026-02-15 03:37:35.561862 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.561880 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.561897 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.561926 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.561946 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.561964 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.561991 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.562005 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:35.562084 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737080 | orchestrator |
2026-02-15 03:37:42.737187 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-15 03:37:42.737207 | orchestrator | Sunday 15 February 2026 03:37:35 +0000 (0:00:01.416) 0:01:38.273 *******
2026-02-15 03:37:42.737225 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737243 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737258 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737303 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737350 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737383 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737412 | orchestrator |
2026-02-15 03:37:42.737426 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-02-15 03:37:42.737441 | orchestrator | Sunday 15 February 2026 03:37:39 +0000 (0:00:03.804) 0:01:42.077 *******
2026-02-15 03:37:42.737479 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737495 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737510 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737528 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737609 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 03:37:42.737625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:37:42.737636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 03:37:42.737646 | orchestrator | 2026-02-15 03:37:42.737659 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-15 03:37:42.737674 | orchestrator | Sunday 15 February 2026 03:37:42 +0000 (0:00:03.156) 0:01:45.234 ******* 2026-02-15 03:37:42.737688 | orchestrator | 2026-02-15 03:37:42.737701 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-15 03:37:42.737715 | orchestrator | Sunday 15 February 2026 03:37:42 +0000 (0:00:00.068) 0:01:45.303 ******* 2026-02-15 03:37:42.737730 | orchestrator | 2026-02-15 03:37:42.737745 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-15 03:37:42.737761 | orchestrator | Sunday 15 February 2026 03:37:42 +0000 (0:00:00.069) 0:01:45.372 ******* 2026-02-15 03:37:42.737776 | orchestrator | 2026-02-15 03:37:42.737803 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-15 03:38:07.275351 | orchestrator | Sunday 15 February 2026 03:37:42 +0000 (0:00:00.067) 0:01:45.439 ******* 2026-02-15 03:38:07.275477 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:38:07.275516 | orchestrator | changed: 
[testbed-node-2] 2026-02-15 03:38:07.275528 | orchestrator | 2026-02-15 03:38:07.275594 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-15 03:38:07.275607 | orchestrator | Sunday 15 February 2026 03:37:48 +0000 (0:00:06.226) 0:01:51.666 ******* 2026-02-15 03:38:07.275618 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:38:07.275629 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:38:07.275640 | orchestrator | 2026-02-15 03:38:07.275651 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-15 03:38:07.275662 | orchestrator | Sunday 15 February 2026 03:37:55 +0000 (0:00:06.192) 0:01:57.858 ******* 2026-02-15 03:38:07.275673 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:38:07.275684 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:38:07.275695 | orchestrator | 2026-02-15 03:38:07.275706 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-02-15 03:38:07.275717 | orchestrator | Sunday 15 February 2026 03:38:01 +0000 (0:00:06.238) 0:02:04.097 ******* 2026-02-15 03:38:07.275727 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:38:07.275738 | orchestrator | 2026-02-15 03:38:07.275749 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-02-15 03:38:07.275760 | orchestrator | Sunday 15 February 2026 03:38:01 +0000 (0:00:00.142) 0:02:04.239 ******* 2026-02-15 03:38:07.275771 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:38:07.275783 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:38:07.275794 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:38:07.275805 | orchestrator | 2026-02-15 03:38:07.275816 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-02-15 03:38:07.275827 | orchestrator | Sunday 15 February 2026 03:38:02 +0000 (0:00:01.125) 0:02:05.365 ******* 
2026-02-15 03:38:07.275838 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:38:07.275849 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:38:07.275860 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:38:07.275872 | orchestrator | 2026-02-15 03:38:07.275883 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-02-15 03:38:07.275896 | orchestrator | Sunday 15 February 2026 03:38:03 +0000 (0:00:00.665) 0:02:06.031 ******* 2026-02-15 03:38:07.275908 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:38:07.275921 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:38:07.275934 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:38:07.275947 | orchestrator | 2026-02-15 03:38:07.275958 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-02-15 03:38:07.275969 | orchestrator | Sunday 15 February 2026 03:38:04 +0000 (0:00:00.860) 0:02:06.891 ******* 2026-02-15 03:38:07.275980 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:38:07.275991 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:38:07.276007 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:38:07.276026 | orchestrator | 2026-02-15 03:38:07.276045 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-02-15 03:38:07.276063 | orchestrator | Sunday 15 February 2026 03:38:04 +0000 (0:00:00.632) 0:02:07.523 ******* 2026-02-15 03:38:07.276080 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:38:07.276098 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:38:07.276114 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:38:07.276131 | orchestrator | 2026-02-15 03:38:07.276149 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-02-15 03:38:07.276165 | orchestrator | Sunday 15 February 2026 03:38:05 +0000 (0:00:01.024) 0:02:08.548 ******* 2026-02-15 03:38:07.276181 | orchestrator 
| ok: [testbed-node-0] 2026-02-15 03:38:07.276200 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:38:07.276219 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:38:07.276236 | orchestrator | 2026-02-15 03:38:07.276254 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:38:07.276274 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-15 03:38:07.276312 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-15 03:38:07.276330 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-02-15 03:38:07.276371 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 03:38:07.276385 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 03:38:07.276396 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 03:38:07.276407 | orchestrator | 2026-02-15 03:38:07.276419 | orchestrator | 2026-02-15 03:38:07.276430 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:38:07.276441 | orchestrator | Sunday 15 February 2026 03:38:06 +0000 (0:00:00.993) 0:02:09.541 ******* 2026-02-15 03:38:07.276452 | orchestrator | =============================================================================== 2026-02-15 03:38:07.276463 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 29.10s 2026-02-15 03:38:07.276474 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.88s 2026-02-15 03:38:07.276485 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.81s 2026-02-15 03:38:07.276496 | orchestrator | ovn-db 
: Restart ovn-sb-db container ------------------------------------ 8.70s 2026-02-15 03:38:07.276506 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.69s 2026-02-15 03:38:07.276595 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.90s 2026-02-15 03:38:07.276611 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.80s 2026-02-15 03:38:07.276629 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.16s 2026-02-15 03:38:07.276646 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.59s 2026-02-15 03:38:07.276661 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.01s 2026-02-15 03:38:07.276677 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.77s 2026-02-15 03:38:07.276694 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.62s 2026-02-15 03:38:07.276711 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.51s 2026-02-15 03:38:07.276730 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s 2026-02-15 03:38:07.276749 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.42s 2026-02-15 03:38:07.276768 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.41s 2026-02-15 03:38:07.276783 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.33s 2026-02-15 03:38:07.276795 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.27s 2026-02-15 03:38:07.276806 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.21s 2026-02-15 03:38:07.276818 | orchestrator | ovn-controller : 
include_tasks ------------------------------------------ 1.19s 2026-02-15 03:38:07.627109 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-15 03:38:07.627222 | orchestrator | + sh -c /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh 2026-02-15 03:38:09.946207 | orchestrator | 2026-02-15 03:38:09 | INFO  | Trying to run play wipe-partitions in environment custom 2026-02-15 03:38:20.121867 | orchestrator | 2026-02-15 03:38:20 | INFO  | Task 1b4464f3-845a-42ef-a63b-2d389419e9ad (wipe-partitions) was prepared for execution. 2026-02-15 03:38:20.121965 | orchestrator | 2026-02-15 03:38:20 | INFO  | It takes a moment until task 1b4464f3-845a-42ef-a63b-2d389419e9ad (wipe-partitions) has been started and output is visible here. 2026-02-15 03:38:33.771797 | orchestrator | 2026-02-15 03:38:33.771910 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-02-15 03:38:33.771927 | orchestrator | 2026-02-15 03:38:33.771941 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-02-15 03:38:33.771954 | orchestrator | Sunday 15 February 2026 03:38:24 +0000 (0:00:00.179) 0:00:00.179 ******* 2026-02-15 03:38:33.771967 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:38:33.771982 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:38:33.771995 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:38:33.772008 | orchestrator | 2026-02-15 03:38:33.772020 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-02-15 03:38:33.772033 | orchestrator | Sunday 15 February 2026 03:38:25 +0000 (0:00:00.669) 0:00:00.848 ******* 2026-02-15 03:38:33.772045 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:38:33.772058 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:38:33.772070 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:38:33.772083 | orchestrator | 2026-02-15 03:38:33.772095 | 
orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-02-15 03:38:33.772108 | orchestrator | Sunday 15 February 2026 03:38:25 +0000 (0:00:00.410) 0:00:01.259 ******* 2026-02-15 03:38:33.772116 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:38:33.772124 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:38:33.772131 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:38:33.772139 | orchestrator | 2026-02-15 03:38:33.772146 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-02-15 03:38:33.772154 | orchestrator | Sunday 15 February 2026 03:38:26 +0000 (0:00:00.656) 0:00:01.916 ******* 2026-02-15 03:38:33.772162 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:38:33.772169 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:38:33.772176 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:38:33.772183 | orchestrator | 2026-02-15 03:38:33.772191 | orchestrator | TASK [Check device availability] *********************************************** 2026-02-15 03:38:33.772212 | orchestrator | Sunday 15 February 2026 03:38:26 +0000 (0:00:00.295) 0:00:02.211 ******* 2026-02-15 03:38:33.772220 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-15 03:38:33.772228 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-15 03:38:33.772236 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-15 03:38:33.772243 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-15 03:38:33.772250 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-15 03:38:33.772257 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-15 03:38:33.772264 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-15 03:38:33.772271 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-15 03:38:33.772278 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 
2026-02-15 03:38:33.772286 | orchestrator | 2026-02-15 03:38:33.772293 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-02-15 03:38:33.772300 | orchestrator | Sunday 15 February 2026 03:38:28 +0000 (0:00:01.315) 0:00:03.527 ******* 2026-02-15 03:38:33.772308 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-02-15 03:38:33.772315 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-02-15 03:38:33.772324 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-02-15 03:38:33.772333 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-02-15 03:38:33.772342 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-02-15 03:38:33.772350 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2026-02-15 03:38:33.772358 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-02-15 03:38:33.772366 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-02-15 03:38:33.772375 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-02-15 03:38:33.772404 | orchestrator | 2026-02-15 03:38:33.772433 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-02-15 03:38:33.772441 | orchestrator | Sunday 15 February 2026 03:38:29 +0000 (0:00:01.695) 0:00:05.223 ******* 2026-02-15 03:38:33.772449 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-02-15 03:38:33.772458 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-02-15 03:38:33.772466 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-02-15 03:38:33.772475 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-02-15 03:38:33.772483 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-02-15 03:38:33.772492 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-02-15 03:38:33.772500 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-02-15 03:38:33.772509 | orchestrator | 
changed: [testbed-node-4] => (item=/dev/sdd) 2026-02-15 03:38:33.772517 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-02-15 03:38:33.772586 | orchestrator | 2026-02-15 03:38:33.772601 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-02-15 03:38:33.772613 | orchestrator | Sunday 15 February 2026 03:38:31 +0000 (0:00:02.242) 0:00:07.465 ******* 2026-02-15 03:38:33.772626 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:38:33.772639 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:38:33.772652 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:38:33.772682 | orchestrator | 2026-02-15 03:38:33.772690 | orchestrator | TASK [Request device events from the kernel] *********************************** 2026-02-15 03:38:33.772697 | orchestrator | Sunday 15 February 2026 03:38:32 +0000 (0:00:00.656) 0:00:08.122 ******* 2026-02-15 03:38:33.772704 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:38:33.772712 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:38:33.772719 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:38:33.772726 | orchestrator | 2026-02-15 03:38:33.772734 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:38:33.772743 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:38:33.772751 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:38:33.772776 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:38:33.772784 | orchestrator | 2026-02-15 03:38:33.772791 | orchestrator | 2026-02-15 03:38:33.772799 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:38:33.772806 | orchestrator | Sunday 15 February 2026 03:38:33 +0000 
(0:00:00.727) 0:00:08.849 ******* 2026-02-15 03:38:33.772813 | orchestrator | =============================================================================== 2026-02-15 03:38:33.772821 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.24s 2026-02-15 03:38:33.772828 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.70s 2026-02-15 03:38:33.772835 | orchestrator | Check device availability ----------------------------------------------- 1.32s 2026-02-15 03:38:33.772843 | orchestrator | Request device events from the kernel ----------------------------------- 0.73s 2026-02-15 03:38:33.772850 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.67s 2026-02-15 03:38:33.772857 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.66s 2026-02-15 03:38:33.772864 | orchestrator | Reload udev rules ------------------------------------------------------- 0.66s 2026-02-15 03:38:33.772872 | orchestrator | Remove all rook related logical devices --------------------------------- 0.41s 2026-02-15 03:38:33.772879 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s 2026-02-15 03:38:46.374343 | orchestrator | 2026-02-15 03:38:46 | INFO  | Task 2f1b6f3c-c400-47fc-b636-ea2c85f09636 (facts) was prepared for execution. 2026-02-15 03:38:46.374459 | orchestrator | 2026-02-15 03:38:46 | INFO  | It takes a moment until task 2f1b6f3c-c400-47fc-b636-ea2c85f09636 (facts) has been started and output is visible here. 
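The wipe-partitions play above boils down to four shell steps per device: erase filesystem/partition signatures with `wipefs`, zero the first 32M to clear leftover LVM/Ceph metadata, reload udev rules, and ask the kernel to re-emit device events. A dry-run sketch of that sequence; the `DRY_RUN` guard and the hard-coded device list are illustrative additions, not taken from the playbook:

```shell
#!/bin/sh
# Dry-run sketch of the wipe-partitions steps from the play above.
# Leave DRY_RUN=echo to only print commands; set it empty to really
# execute them -- destructive on real devices!
DRY_RUN=echo

wipe_device() {
    dev="$1"
    # Remove all filesystem/RAID/partition-table signatures
    $DRY_RUN wipefs --all "$dev"
    # Zero the first 32 MiB so stale LVM/Ceph metadata is gone
    $DRY_RUN dd if=/dev/zero of="$dev" bs=1M count=32 oflag=direct
}

for dev in /dev/sdb /dev/sdc /dev/sdd; do
    wipe_device "$dev"
done

# Re-read udev rules and request fresh device events from the kernel
$DRY_RUN udevadm control --reload-rules
$DRY_RUN udevadm trigger
```

With the guard in place the script only prints the commands it would run, which mirrors what the play's task names report without touching any disks.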
2026-02-15 03:39:00.172282 | orchestrator | 2026-02-15 03:39:00.172426 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-15 03:39:00.172455 | orchestrator | 2026-02-15 03:39:00.172472 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-15 03:39:00.172485 | orchestrator | Sunday 15 February 2026 03:38:51 +0000 (0:00:00.306) 0:00:00.306 ******* 2026-02-15 03:39:00.172496 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:39:00.172508 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:39:00.172604 | orchestrator | ok: [testbed-manager] 2026-02-15 03:39:00.172617 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:39:00.172628 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:39:00.172639 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:39:00.172651 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:39:00.172662 | orchestrator | 2026-02-15 03:39:00.172673 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-15 03:39:00.172684 | orchestrator | Sunday 15 February 2026 03:38:52 +0000 (0:00:01.205) 0:00:01.511 ******* 2026-02-15 03:39:00.172695 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:39:00.172708 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:39:00.172719 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:39:00.172730 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:39:00.172741 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:39:00.172751 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:00.172762 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:00.172773 | orchestrator | 2026-02-15 03:39:00.172785 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-15 03:39:00.172797 | orchestrator | 2026-02-15 03:39:00.172810 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-15 03:39:00.172823 | orchestrator | Sunday 15 February 2026 03:38:53 +0000 (0:00:01.413) 0:00:02.925 ******* 2026-02-15 03:39:00.172836 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:39:00.172849 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:39:00.172861 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:39:00.172874 | orchestrator | ok: [testbed-manager] 2026-02-15 03:39:00.172886 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:39:00.172904 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:39:00.172922 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:39:00.172940 | orchestrator | 2026-02-15 03:39:00.172959 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-15 03:39:00.172978 | orchestrator | 2026-02-15 03:39:00.172997 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-15 03:39:00.173016 | orchestrator | Sunday 15 February 2026 03:38:59 +0000 (0:00:05.348) 0:00:08.273 ******* 2026-02-15 03:39:00.173036 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:39:00.173056 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:39:00.173077 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:39:00.173096 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:39:00.173115 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:39:00.173131 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:00.173145 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:00.173157 | orchestrator | 2026-02-15 03:39:00.173171 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:39:00.173184 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:39:00.173245 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-15 03:39:00.173258 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:39:00.173299 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:39:00.173311 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:39:00.173322 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:39:00.173497 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:39:00.173510 | orchestrator | 2026-02-15 03:39:00.173556 | orchestrator | 2026-02-15 03:39:00.173569 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:39:00.173580 | orchestrator | Sunday 15 February 2026 03:38:59 +0000 (0:00:00.649) 0:00:08.923 ******* 2026-02-15 03:39:00.173591 | orchestrator | =============================================================================== 2026-02-15 03:39:00.173601 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.35s 2026-02-15 03:39:00.173613 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.41s 2026-02-15 03:39:00.173624 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.21s 2026-02-15 03:39:00.173635 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.65s 2026-02-15 03:39:02.901633 | orchestrator | 2026-02-15 03:39:02 | INFO  | Task 178aa375-1dc0-4507-9dc2-01358cdd7f91 (ceph-configure-lvm-volumes) was prepared for execution. 
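The ceph-configure-lvm-volumes play that follows builds its device list by resolving persistent `/dev/disk/by-id` links (the `scsi-0QEMU_QEMU_HARDDISK_...` entries in the task output) back to block devices. A minimal sketch of that mapping; the optional directory parameter is an illustrative addition so the function can be pointed at a non-default path:

```shell
#!/bin/sh
# Map persistent /dev/disk/by-id links to the block devices they
# point at, similar to the "Add known links ..." tasks below.
by_id_links() {
    dir="${1:-/dev/disk/by-id}"       # default matches the real path
    for link in "$dir"/*; do
        [ -e "$link" ] || continue    # skip if the glob matched nothing
        # readlink -f resolves the symlink to the underlying device node
        printf '%s -> %s\n' "$(readlink -f "$link")" "$link"
    done
}

by_id_links
```

Each output line pairs a device node with one of its stable by-id aliases, which is the association the playbook records before configuring LVM volumes on top.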
2026-02-15 03:39:02.901777 | orchestrator | 2026-02-15 03:39:02 | INFO  | It takes a moment until task 178aa375-1dc0-4507-9dc2-01358cdd7f91 (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-02-15 03:39:15.851987 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-15 03:39:15.852090 | orchestrator | 2.16.14
2026-02-15 03:39:15.852107 | orchestrator |
2026-02-15 03:39:15.852119 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-15 03:39:15.852130 | orchestrator |
2026-02-15 03:39:15.852141 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-15 03:39:15.852152 | orchestrator | Sunday 15 February 2026 03:39:07 +0000 (0:00:00.368) 0:00:00.368 *******
2026-02-15 03:39:15.852163 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-15 03:39:15.852173 | orchestrator |
2026-02-15 03:39:15.852183 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-15 03:39:15.852193 | orchestrator | Sunday 15 February 2026 03:39:08 +0000 (0:00:00.273) 0:00:00.642 *******
2026-02-15 03:39:15.852203 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:39:15.852213 | orchestrator |
2026-02-15 03:39:15.852224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852233 | orchestrator | Sunday 15 February 2026 03:39:08 +0000 (0:00:00.251) 0:00:00.893 *******
2026-02-15 03:39:15.852243 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-02-15 03:39:15.852253 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-02-15 03:39:15.852263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-02-15 03:39:15.852273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-02-15 03:39:15.852283 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-02-15 03:39:15.852293 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-02-15 03:39:15.852303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-02-15 03:39:15.852312 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-02-15 03:39:15.852344 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-02-15 03:39:15.852356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-02-15 03:39:15.852365 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-02-15 03:39:15.852375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-02-15 03:39:15.852385 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-02-15 03:39:15.852395 | orchestrator |
2026-02-15 03:39:15.852406 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852416 | orchestrator | Sunday 15 February 2026 03:39:08 +0000 (0:00:00.556) 0:00:01.449 *******
2026-02-15 03:39:15.852426 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.852436 | orchestrator |
2026-02-15 03:39:15.852446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852456 | orchestrator | Sunday 15 February 2026 03:39:09 +0000 (0:00:00.226) 0:00:01.676 *******
2026-02-15 03:39:15.852466 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.852475 | orchestrator |
2026-02-15 03:39:15.852485 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852495 | orchestrator | Sunday 15 February 2026 03:39:09 +0000 (0:00:00.211) 0:00:01.888 *******
2026-02-15 03:39:15.852505 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.852544 | orchestrator |
2026-02-15 03:39:15.852557 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852569 | orchestrator | Sunday 15 February 2026 03:39:09 +0000 (0:00:00.209) 0:00:02.098 *******
2026-02-15 03:39:15.852580 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.852591 | orchestrator |
2026-02-15 03:39:15.852603 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852615 | orchestrator | Sunday 15 February 2026 03:39:09 +0000 (0:00:00.221) 0:00:02.320 *******
2026-02-15 03:39:15.852626 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.852637 | orchestrator |
2026-02-15 03:39:15.852648 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852659 | orchestrator | Sunday 15 February 2026 03:39:10 +0000 (0:00:00.225) 0:00:02.545 *******
2026-02-15 03:39:15.852670 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.852681 | orchestrator |
2026-02-15 03:39:15.852692 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852704 | orchestrator | Sunday 15 February 2026 03:39:10 +0000 (0:00:00.230) 0:00:02.775 *******
2026-02-15 03:39:15.852715 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.852726 | orchestrator |
2026-02-15 03:39:15.852737 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852749 | orchestrator | Sunday 15 February 2026 03:39:10 +0000 (0:00:00.220) 0:00:02.995 *******
2026-02-15 03:39:15.852760 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.852771 | orchestrator |
2026-02-15 03:39:15.852782 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852793 | orchestrator | Sunday 15 February 2026 03:39:10 +0000 (0:00:00.214) 0:00:03.209 *******
2026-02-15 03:39:15.852804 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45)
2026-02-15 03:39:15.852817 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45)
2026-02-15 03:39:15.852828 | orchestrator |
2026-02-15 03:39:15.852840 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852883 | orchestrator | Sunday 15 February 2026 03:39:11 +0000 (0:00:00.477) 0:00:03.687 *******
2026-02-15 03:39:15.852896 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090)
2026-02-15 03:39:15.852915 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090)
2026-02-15 03:39:15.852926 | orchestrator |
2026-02-15 03:39:15.852936 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852946 | orchestrator | Sunday 15 February 2026 03:39:11 +0000 (0:00:00.655) 0:00:04.343 *******
2026-02-15 03:39:15.852956 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71)
2026-02-15 03:39:15.852966 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71)
2026-02-15 03:39:15.852976 | orchestrator |
2026-02-15 03:39:15.852986 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.852996 | orchestrator | Sunday 15 February 2026 03:39:12 +0000 (0:00:00.735) 0:00:05.079 *******
2026-02-15 03:39:15.853005 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e)
2026-02-15 03:39:15.853015 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e)
2026-02-15 03:39:15.853025 | orchestrator |
2026-02-15 03:39:15.853035 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:15.853045 | orchestrator | Sunday 15 February 2026 03:39:13 +0000 (0:00:00.946) 0:00:06.025 *******
2026-02-15 03:39:15.853055 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-15 03:39:15.853064 | orchestrator |
2026-02-15 03:39:15.853074 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:15.853084 | orchestrator | Sunday 15 February 2026 03:39:13 +0000 (0:00:00.334) 0:00:06.360 *******
2026-02-15 03:39:15.853094 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-02-15 03:39:15.853103 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-02-15 03:39:15.853113 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-02-15 03:39:15.853130 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-02-15 03:39:15.853146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-02-15 03:39:15.853163 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-02-15 03:39:15.853178 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-02-15 03:39:15.853194 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-02-15 03:39:15.853209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-02-15 03:39:15.853225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-02-15 03:39:15.853240 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-02-15 03:39:15.853256 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-02-15 03:39:15.853272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-02-15 03:39:15.853289 | orchestrator |
2026-02-15 03:39:15.853306 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:15.853323 | orchestrator | Sunday 15 February 2026 03:39:14 +0000 (0:00:00.401) 0:00:06.761 *******
2026-02-15 03:39:15.853340 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.853350 | orchestrator |
2026-02-15 03:39:15.853360 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:15.853369 | orchestrator | Sunday 15 February 2026 03:39:14 +0000 (0:00:00.227) 0:00:06.988 *******
2026-02-15 03:39:15.853379 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.853389 | orchestrator |
2026-02-15 03:39:15.853407 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:15.853416 | orchestrator | Sunday 15 February 2026 03:39:14 +0000 (0:00:00.272) 0:00:07.261 *******
2026-02-15 03:39:15.853426 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.853435 | orchestrator |
2026-02-15 03:39:15.853445 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:15.853455 | orchestrator | Sunday 15 February 2026 03:39:14 +0000 (0:00:00.220) 0:00:07.482 *******
2026-02-15 03:39:15.853465 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.853475 | orchestrator |
2026-02-15 03:39:15.853485 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:15.853494 | orchestrator | Sunday 15 February 2026 03:39:15 +0000 (0:00:00.242) 0:00:07.724 *******
2026-02-15 03:39:15.853504 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.853553 | orchestrator |
2026-02-15 03:39:15.853566 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:15.853576 | orchestrator | Sunday 15 February 2026 03:39:15 +0000 (0:00:00.218) 0:00:07.942 *******
2026-02-15 03:39:15.853586 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.853595 | orchestrator |
2026-02-15 03:39:15.853605 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:15.853615 | orchestrator | Sunday 15 February 2026 03:39:15 +0000 (0:00:00.208) 0:00:08.151 *******
2026-02-15 03:39:15.853625 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:15.853635 | orchestrator |
2026-02-15 03:39:15.853659 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:24.383899 | orchestrator | Sunday 15 February 2026 03:39:15 +0000 (0:00:00.215) 0:00:08.367 *******
2026-02-15 03:39:24.384037 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384057 | orchestrator |
2026-02-15 03:39:24.384073 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:24.384088 | orchestrator | Sunday 15 February 2026 03:39:16 +0000 (0:00:00.208) 0:00:08.576 *******
2026-02-15 03:39:24.384101 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-02-15 03:39:24.384162 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-02-15 03:39:24.384179 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-02-15 03:39:24.384193 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-02-15 03:39:24.384206 | orchestrator |
2026-02-15 03:39:24.384220 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:24.384235 | orchestrator | Sunday 15 February 2026 03:39:17 +0000 (0:00:01.224) 0:00:09.801 *******
2026-02-15 03:39:24.384248 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384261 | orchestrator |
2026-02-15 03:39:24.384288 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:24.384298 | orchestrator | Sunday 15 February 2026 03:39:17 +0000 (0:00:00.225) 0:00:10.027 *******
2026-02-15 03:39:24.384307 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384315 | orchestrator |
2026-02-15 03:39:24.384323 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:24.384331 | orchestrator | Sunday 15 February 2026 03:39:17 +0000 (0:00:00.217) 0:00:10.245 *******
2026-02-15 03:39:24.384339 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384347 | orchestrator |
2026-02-15 03:39:24.384355 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:24.384363 | orchestrator | Sunday 15 February 2026 03:39:17 +0000 (0:00:00.234) 0:00:10.480 *******
2026-02-15 03:39:24.384371 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384379 | orchestrator |
2026-02-15 03:39:24.384387 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-15 03:39:24.384395 | orchestrator | Sunday 15 February 2026 03:39:18 +0000 (0:00:00.221) 0:00:10.702 *******
2026-02-15 03:39:24.384403 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-02-15 03:39:24.384410 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-02-15 03:39:24.384439 | orchestrator |
2026-02-15 03:39:24.384448 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-02-15 03:39:24.384456 | orchestrator | Sunday 15 February 2026 03:39:18 +0000 (0:00:00.180) 0:00:10.883 *******
2026-02-15 03:39:24.384464 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384471 | orchestrator |
2026-02-15 03:39:24.384479 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-15 03:39:24.384488 | orchestrator | Sunday 15 February 2026 03:39:18 +0000 (0:00:00.156) 0:00:11.039 *******
2026-02-15 03:39:24.384496 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384503 | orchestrator |
2026-02-15 03:39:24.384572 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-15 03:39:24.384587 | orchestrator | Sunday 15 February 2026 03:39:18 +0000 (0:00:00.165) 0:00:11.205 *******
2026-02-15 03:39:24.384600 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384610 | orchestrator |
2026-02-15 03:39:24.384619 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-15 03:39:24.384627 | orchestrator | Sunday 15 February 2026 03:39:18 +0000 (0:00:00.168) 0:00:11.373 *******
2026-02-15 03:39:24.384634 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:39:24.384642 | orchestrator |
2026-02-15 03:39:24.384650 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-15 03:39:24.384658 | orchestrator | Sunday 15 February 2026 03:39:18 +0000 (0:00:00.151) 0:00:11.525 *******
2026-02-15 03:39:24.384667 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11907033-e329-56e1-bf1e-182edc1a3769'}})
2026-02-15 03:39:24.384675 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '308eeb04-119e-5b1b-acdb-31959eb9ce55'}})
2026-02-15 03:39:24.384683 | orchestrator |
2026-02-15 03:39:24.384691 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-15 03:39:24.384699 | orchestrator | Sunday 15 February 2026 03:39:19 +0000 (0:00:00.200) 0:00:11.726 *******
2026-02-15 03:39:24.384708 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11907033-e329-56e1-bf1e-182edc1a3769'}})
2026-02-15 03:39:24.384718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '308eeb04-119e-5b1b-acdb-31959eb9ce55'}})
2026-02-15 03:39:24.384725 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384733 | orchestrator |
2026-02-15 03:39:24.384741 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-15 03:39:24.384750 | orchestrator | Sunday 15 February 2026 03:39:19 +0000 (0:00:00.386) 0:00:12.112 *******
2026-02-15 03:39:24.384758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11907033-e329-56e1-bf1e-182edc1a3769'}})
2026-02-15 03:39:24.384766 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '308eeb04-119e-5b1b-acdb-31959eb9ce55'}})
2026-02-15 03:39:24.384774 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384781 | orchestrator |
2026-02-15 03:39:24.384789 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-15 03:39:24.384797 | orchestrator | Sunday 15 February 2026 03:39:19 +0000 (0:00:00.176) 0:00:12.289 *******
2026-02-15 03:39:24.384805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11907033-e329-56e1-bf1e-182edc1a3769'}})
2026-02-15 03:39:24.384847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '308eeb04-119e-5b1b-acdb-31959eb9ce55'}})
2026-02-15 03:39:24.384857 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384865 | orchestrator |
2026-02-15 03:39:24.384873 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-15 03:39:24.384881 | orchestrator | Sunday 15 February 2026 03:39:19 +0000 (0:00:00.151) 0:00:12.440 *******
2026-02-15 03:39:24.384889 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:39:24.384905 | orchestrator |
2026-02-15 03:39:24.384913 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-15 03:39:24.384921 | orchestrator | Sunday 15 February 2026 03:39:20 +0000 (0:00:00.151) 0:00:12.592 *******
2026-02-15 03:39:24.384929 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:39:24.384937 | orchestrator |
2026-02-15 03:39:24.384945 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-15 03:39:24.384953 | orchestrator | Sunday 15 February 2026 03:39:20 +0000 (0:00:00.163) 0:00:12.755 *******
2026-02-15 03:39:24.384961 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.384968 | orchestrator |
2026-02-15 03:39:24.384977 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-15 03:39:24.384985 | orchestrator | Sunday 15 February 2026 03:39:20 +0000 (0:00:00.150) 0:00:12.906 *******
2026-02-15 03:39:24.384992 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.385000 | orchestrator |
2026-02-15 03:39:24.385008 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-15 03:39:24.385016 | orchestrator | Sunday 15 February 2026 03:39:20 +0000 (0:00:00.149) 0:00:13.055 *******
2026-02-15 03:39:24.385023 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.385031 | orchestrator |
2026-02-15 03:39:24.385039 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-15 03:39:24.385047 | orchestrator | Sunday 15 February 2026 03:39:20 +0000 (0:00:00.158) 0:00:13.214 *******
2026-02-15 03:39:24.385054 | orchestrator | ok: [testbed-node-3] => {
2026-02-15 03:39:24.385062 | orchestrator |     "ceph_osd_devices": {
2026-02-15 03:39:24.385070 | orchestrator |         "sdb": {
2026-02-15 03:39:24.385078 | orchestrator |             "osd_lvm_uuid": "11907033-e329-56e1-bf1e-182edc1a3769"
2026-02-15 03:39:24.385086 | orchestrator |         },
2026-02-15 03:39:24.385094 | orchestrator |         "sdc": {
2026-02-15 03:39:24.385102 | orchestrator |             "osd_lvm_uuid": "308eeb04-119e-5b1b-acdb-31959eb9ce55"
2026-02-15 03:39:24.385110 | orchestrator |         }
2026-02-15 03:39:24.385118 | orchestrator |     }
2026-02-15 03:39:24.385126 | orchestrator | }
2026-02-15 03:39:24.385140 | orchestrator |
2026-02-15 03:39:24.385153 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-15 03:39:24.385166 | orchestrator | Sunday 15 February 2026 03:39:20 +0000 (0:00:00.160) 0:00:13.374 *******
2026-02-15 03:39:24.385178 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.385191 | orchestrator |
2026-02-15 03:39:24.385204 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-15 03:39:24.385218 | orchestrator | Sunday 15 February 2026 03:39:21 +0000 (0:00:00.177) 0:00:13.552 *******
2026-02-15 03:39:24.385232 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.385246 | orchestrator |
2026-02-15 03:39:24.385259 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-15 03:39:24.385272 | orchestrator | Sunday 15 February 2026 03:39:21 +0000 (0:00:00.171) 0:00:13.723 *******
2026-02-15 03:39:24.385285 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:39:24.385299 | orchestrator |
2026-02-15 03:39:24.385311 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-15 03:39:24.385319 | orchestrator | Sunday 15 February 2026 03:39:21 +0000 (0:00:00.161) 0:00:13.885 *******
2026-02-15 03:39:24.385327 | orchestrator | changed: [testbed-node-3] => {
2026-02-15 03:39:24.385335 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-15 03:39:24.385343 | orchestrator |         "ceph_osd_devices": {
2026-02-15 03:39:24.385351 | orchestrator |             "sdb": {
2026-02-15 03:39:24.385359 | orchestrator |                 "osd_lvm_uuid": "11907033-e329-56e1-bf1e-182edc1a3769"
2026-02-15 03:39:24.385366 | orchestrator |             },
2026-02-15 03:39:24.385374 | orchestrator |             "sdc": {
2026-02-15 03:39:24.385382 | orchestrator |                 "osd_lvm_uuid": "308eeb04-119e-5b1b-acdb-31959eb9ce55"
2026-02-15 03:39:24.385391 | orchestrator |             }
2026-02-15 03:39:24.385399 | orchestrator |         },
2026-02-15 03:39:24.385407 | orchestrator |         "lvm_volumes": [
2026-02-15 03:39:24.385423 | orchestrator |             {
2026-02-15 03:39:24.385431 | orchestrator |                 "data": "osd-block-11907033-e329-56e1-bf1e-182edc1a3769",
2026-02-15 03:39:24.385439 | orchestrator |                 "data_vg": "ceph-11907033-e329-56e1-bf1e-182edc1a3769"
2026-02-15 03:39:24.385447 | orchestrator |             },
2026-02-15 03:39:24.385455 | orchestrator |             {
2026-02-15 03:39:24.385462 | orchestrator |                 "data": "osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55",
2026-02-15 03:39:24.385470 | orchestrator |                 "data_vg": "ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55"
2026-02-15 03:39:24.385478 | orchestrator |             }
2026-02-15 03:39:24.385486 | orchestrator |         ]
2026-02-15 03:39:24.385494 | orchestrator |     }
2026-02-15 03:39:24.385502 | orchestrator | }
2026-02-15 03:39:24.385529 | orchestrator |
2026-02-15 03:39:24.385539 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-15 03:39:24.385546 | orchestrator | Sunday 15 February 2026 03:39:21 +0000 (0:00:00.442) 0:00:14.328 *******
2026-02-15 03:39:24.385554 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-15 03:39:24.385562 | orchestrator |
2026-02-15 03:39:24.385570 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-02-15 03:39:24.385577 | orchestrator |
2026-02-15 03:39:24.385585 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-15 03:39:24.385593 | orchestrator | Sunday 15 February 2026 03:39:23 +0000 (0:00:01.993) 0:00:16.322 *******
2026-02-15 03:39:24.385601 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-02-15 03:39:24.385609 | orchestrator |
2026-02-15 03:39:24.385616 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-02-15 03:39:24.385624 | orchestrator | Sunday 15 February 2026 03:39:24 +0000 (0:00:00.288) 0:00:16.610 *******
2026-02-15 03:39:24.385632 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:39:24.385640 | orchestrator |
2026-02-15 03:39:24.385661 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.478973 | orchestrator | Sunday 15 February 2026 03:39:24 +0000 (0:00:00.291) 0:00:16.901 *******
2026-02-15 03:39:34.479077 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-02-15 03:39:34.479091 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-02-15 03:39:34.479102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-02-15 03:39:34.479112 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-02-15 03:39:34.479122 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-02-15 03:39:34.479132 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-02-15 03:39:34.479142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-02-15 03:39:34.479151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-02-15 03:39:34.479161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-02-15 03:39:34.479170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-02-15 03:39:34.479180 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-02-15 03:39:34.479189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-02-15 03:39:34.479199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-15 03:39:34.479209 | orchestrator |
2026-02-15 03:39:34.479221 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479230 | orchestrator | Sunday 15 February 2026 03:39:24 +0000 (0:00:00.426) 0:00:17.327 *******
2026-02-15 03:39:34.479248 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.479294 | orchestrator |
2026-02-15 03:39:34.479313 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479330 | orchestrator | Sunday 15 February 2026 03:39:25 +0000 (0:00:00.213) 0:00:17.541 *******
2026-02-15 03:39:34.479347 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.479361 | orchestrator |
2026-02-15 03:39:34.479378 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479395 | orchestrator | Sunday 15 February 2026 03:39:25 +0000 (0:00:00.221) 0:00:17.762 *******
2026-02-15 03:39:34.479412 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.479429 | orchestrator |
2026-02-15 03:39:34.479446 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479463 | orchestrator | Sunday 15 February 2026 03:39:25 +0000 (0:00:00.236) 0:00:17.999 *******
2026-02-15 03:39:34.479479 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.479496 | orchestrator |
2026-02-15 03:39:34.479541 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479559 | orchestrator | Sunday 15 February 2026 03:39:26 +0000 (0:00:00.691) 0:00:18.691 *******
2026-02-15 03:39:34.479575 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.479589 | orchestrator |
2026-02-15 03:39:34.479601 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479612 | orchestrator | Sunday 15 February 2026 03:39:26 +0000 (0:00:00.243) 0:00:18.934 *******
2026-02-15 03:39:34.479623 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.479632 | orchestrator |
2026-02-15 03:39:34.479642 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479652 | orchestrator | Sunday 15 February 2026 03:39:26 +0000 (0:00:00.237) 0:00:19.172 *******
2026-02-15 03:39:34.479661 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.479671 | orchestrator |
2026-02-15 03:39:34.479680 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479690 | orchestrator | Sunday 15 February 2026 03:39:26 +0000 (0:00:00.230) 0:00:19.403 *******
2026-02-15 03:39:34.479700 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.479709 | orchestrator |
2026-02-15 03:39:34.479719 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479729 | orchestrator | Sunday 15 February 2026 03:39:27 +0000 (0:00:00.232) 0:00:19.636 *******
2026-02-15 03:39:34.479740 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006)
2026-02-15 03:39:34.479751 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006)
2026-02-15 03:39:34.479761 | orchestrator |
2026-02-15 03:39:34.479770 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479780 | orchestrator | Sunday 15 February 2026 03:39:27 +0000 (0:00:00.491) 0:00:20.128 *******
2026-02-15 03:39:34.479790 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57)
2026-02-15 03:39:34.479799 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57)
2026-02-15 03:39:34.479809 | orchestrator |
2026-02-15 03:39:34.479819 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479829 | orchestrator | Sunday 15 February 2026 03:39:28 +0000 (0:00:00.474) 0:00:20.602 *******
2026-02-15 03:39:34.479838 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0)
2026-02-15 03:39:34.479848 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0)
2026-02-15 03:39:34.479857 | orchestrator |
2026-02-15 03:39:34.479883 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479911 | orchestrator | Sunday 15 February 2026 03:39:28 +0000 (0:00:00.449) 0:00:21.051 *******
2026-02-15 03:39:34.479922 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7)
2026-02-15 03:39:34.479941 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7)
2026-02-15 03:39:34.479951 | orchestrator |
2026-02-15 03:39:34.479961 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:39:34.479971 | orchestrator | Sunday 15 February 2026 03:39:29 +0000 (0:00:00.716) 0:00:21.768 *******
2026-02-15 03:39:34.479981 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-15 03:39:34.479991 | orchestrator |
2026-02-15 03:39:34.480001 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:34.480010 | orchestrator | Sunday 15 February 2026 03:39:29 +0000 (0:00:00.648) 0:00:22.416 *******
2026-02-15 03:39:34.480020 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-15 03:39:34.480030 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-15 03:39:34.480040 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-15 03:39:34.480049 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-15 03:39:34.480059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-15 03:39:34.480069 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-15 03:39:34.480078 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-15 03:39:34.480089 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-15 03:39:34.480098 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-15 03:39:34.480108 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-15 03:39:34.480118 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-15 03:39:34.480127 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-15 03:39:34.480137 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-15 03:39:34.480147 | orchestrator |
2026-02-15 03:39:34.480156 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:34.480166 | orchestrator | Sunday 15 February 2026 03:39:30 +0000 (0:00:00.938) 0:00:23.354 *******
2026-02-15 03:39:34.480176 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.480186 | orchestrator |
2026-02-15 03:39:34.480196 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:34.480205 | orchestrator | Sunday 15 February 2026 03:39:31 +0000 (0:00:00.240) 0:00:23.594 *******
2026-02-15 03:39:34.480215 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.480225 | orchestrator |
2026-02-15 03:39:34.480235 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:34.480245 | orchestrator | Sunday 15 February 2026 03:39:31 +0000 (0:00:00.239) 0:00:23.834 *******
2026-02-15 03:39:34.480254 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.480264 | orchestrator |
2026-02-15 03:39:34.480274 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:34.480284 | orchestrator | Sunday 15 February 2026 03:39:31 +0000 (0:00:00.220) 0:00:24.054 *******
2026-02-15 03:39:34.480293 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.480303 | orchestrator |
2026-02-15 03:39:34.480313 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:34.480323 | orchestrator | Sunday 15 February 2026 03:39:31 +0000 (0:00:00.241) 0:00:24.296 *******
2026-02-15 03:39:34.480332 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.480342 | orchestrator |
2026-02-15 03:39:34.480352 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:34.480369 | orchestrator | Sunday 15 February 2026 03:39:31 +0000 (0:00:00.219) 0:00:24.516 *******
2026-02-15 03:39:34.480385 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.480401 | orchestrator |
2026-02-15 03:39:34.480418 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:34.480434 | orchestrator | Sunday 15 February 2026 03:39:32 +0000 (0:00:00.279) 0:00:24.796 *******
2026-02-15 03:39:34.480451 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.480468 | orchestrator |
2026-02-15 03:39:34.480485 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:34.480502 | orchestrator | Sunday 15 February 2026 03:39:32 +0000 (0:00:00.254) 0:00:25.050 *******
2026-02-15 03:39:34.480612 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:34.480623 | orchestrator |
2026-02-15 03:39:34.480633 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:34.480643 | orchestrator | Sunday 15 February 2026 03:39:32 +0000 (0:00:00.237) 0:00:25.288 *******
2026-02-15 03:39:34.480653 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-15 03:39:34.480663 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-15 03:39:34.480673 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-15 03:39:34.480682 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-15 03:39:34.480692 | orchestrator |
2026-02-15 03:39:34.480702 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:34.480720 | orchestrator | Sunday 15 February 2026 03:39:33 +0000 (0:00:00.986) 0:00:26.274 *******
2026-02-15 03:39:34.480730 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:41.107249 | orchestrator |
2026-02-15 03:39:41.107338 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:41.107349 | orchestrator | Sunday 15 February 2026 03:39:34 +0000 (0:00:00.723) 0:00:26.997 *******
2026-02-15 03:39:41.107357 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:41.107365 | orchestrator |
2026-02-15 03:39:41.107372 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:41.107379 | orchestrator | Sunday 15 February 2026 03:39:34 +0000 (0:00:00.265) 0:00:27.263 *******
2026-02-15 03:39:41.107385 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:41.107392 | orchestrator |
2026-02-15 03:39:41.107398 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:39:41.107404 | orchestrator | Sunday 15 February 2026 03:39:34 +0000 (0:00:00.237) 0:00:27.501 *******
2026-02-15 03:39:41.107411 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:39:41.107417 | orchestrator |
2026-02-15 03:39:41.107423 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-02-15 03:39:41.107430 | orchestrator | Sunday 15 February 2026 03:39:35 +0000 (0:00:00.260) 0:00:27.761 *******
2026-02-15 03:39:41.107436 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-02-15 03:39:41.107443 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-02-15 03:39:41.107449 | orchestrator |
2026-02-15 03:39:41.107455 | orchestrator | TASK [Generate WAL VG names]
*************************************************** 2026-02-15 03:39:41.107462 | orchestrator | Sunday 15 February 2026 03:39:35 +0000 (0:00:00.206) 0:00:27.968 ******* 2026-02-15 03:39:41.107468 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.107474 | orchestrator | 2026-02-15 03:39:41.107481 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-02-15 03:39:41.107487 | orchestrator | Sunday 15 February 2026 03:39:35 +0000 (0:00:00.155) 0:00:28.123 ******* 2026-02-15 03:39:41.107493 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.107499 | orchestrator | 2026-02-15 03:39:41.107506 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-02-15 03:39:41.107541 | orchestrator | Sunday 15 February 2026 03:39:35 +0000 (0:00:00.147) 0:00:28.271 ******* 2026-02-15 03:39:41.107548 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.107574 | orchestrator | 2026-02-15 03:39:41.107581 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-02-15 03:39:41.107587 | orchestrator | Sunday 15 February 2026 03:39:35 +0000 (0:00:00.148) 0:00:28.419 ******* 2026-02-15 03:39:41.107594 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:39:41.107601 | orchestrator | 2026-02-15 03:39:41.107607 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-02-15 03:39:41.107614 | orchestrator | Sunday 15 February 2026 03:39:36 +0000 (0:00:00.155) 0:00:28.575 ******* 2026-02-15 03:39:41.107620 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85fe8ada-5694-5853-9626-8b4c90604800'}}) 2026-02-15 03:39:41.107627 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '12f88160-c11a-5ad6-adc7-3b0cfe47daee'}}) 2026-02-15 03:39:41.107633 | orchestrator | 2026-02-15 03:39:41.107640 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-02-15 03:39:41.107646 | orchestrator | Sunday 15 February 2026 03:39:36 +0000 (0:00:00.194) 0:00:28.769 ******* 2026-02-15 03:39:41.107653 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85fe8ada-5694-5853-9626-8b4c90604800'}})  2026-02-15 03:39:41.107661 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '12f88160-c11a-5ad6-adc7-3b0cfe47daee'}})  2026-02-15 03:39:41.107667 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.107673 | orchestrator | 2026-02-15 03:39:41.107680 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-02-15 03:39:41.107686 | orchestrator | Sunday 15 February 2026 03:39:36 +0000 (0:00:00.176) 0:00:28.946 ******* 2026-02-15 03:39:41.107692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85fe8ada-5694-5853-9626-8b4c90604800'}})  2026-02-15 03:39:41.107699 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '12f88160-c11a-5ad6-adc7-3b0cfe47daee'}})  2026-02-15 03:39:41.107705 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.107711 | orchestrator | 2026-02-15 03:39:41.107717 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-02-15 03:39:41.107724 | orchestrator | Sunday 15 February 2026 03:39:36 +0000 (0:00:00.440) 0:00:29.386 ******* 2026-02-15 03:39:41.107730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85fe8ada-5694-5853-9626-8b4c90604800'}})  2026-02-15 03:39:41.107736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '12f88160-c11a-5ad6-adc7-3b0cfe47daee'}})  2026-02-15 03:39:41.107742 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.107748 | 
orchestrator | 2026-02-15 03:39:41.107754 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-02-15 03:39:41.107761 | orchestrator | Sunday 15 February 2026 03:39:37 +0000 (0:00:00.184) 0:00:29.571 ******* 2026-02-15 03:39:41.107767 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:39:41.107773 | orchestrator | 2026-02-15 03:39:41.107779 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-02-15 03:39:41.107785 | orchestrator | Sunday 15 February 2026 03:39:37 +0000 (0:00:00.153) 0:00:29.724 ******* 2026-02-15 03:39:41.107792 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:39:41.107798 | orchestrator | 2026-02-15 03:39:41.107804 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-02-15 03:39:41.107822 | orchestrator | Sunday 15 February 2026 03:39:37 +0000 (0:00:00.149) 0:00:29.873 ******* 2026-02-15 03:39:41.107841 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.107849 | orchestrator | 2026-02-15 03:39:41.107856 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-02-15 03:39:41.107863 | orchestrator | Sunday 15 February 2026 03:39:37 +0000 (0:00:00.162) 0:00:30.036 ******* 2026-02-15 03:39:41.107870 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.107877 | orchestrator | 2026-02-15 03:39:41.107889 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-02-15 03:39:41.107897 | orchestrator | Sunday 15 February 2026 03:39:37 +0000 (0:00:00.154) 0:00:30.190 ******* 2026-02-15 03:39:41.107904 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.107911 | orchestrator | 2026-02-15 03:39:41.107919 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-02-15 03:39:41.107926 | orchestrator | Sunday 15 February 2026 03:39:37 +0000 
(0:00:00.155) 0:00:30.346 ******* 2026-02-15 03:39:41.107933 | orchestrator | ok: [testbed-node-4] => { 2026-02-15 03:39:41.107941 | orchestrator |  "ceph_osd_devices": { 2026-02-15 03:39:41.107948 | orchestrator |  "sdb": { 2026-02-15 03:39:41.107956 | orchestrator |  "osd_lvm_uuid": "85fe8ada-5694-5853-9626-8b4c90604800" 2026-02-15 03:39:41.107963 | orchestrator |  }, 2026-02-15 03:39:41.107971 | orchestrator |  "sdc": { 2026-02-15 03:39:41.107978 | orchestrator |  "osd_lvm_uuid": "12f88160-c11a-5ad6-adc7-3b0cfe47daee" 2026-02-15 03:39:41.107985 | orchestrator |  } 2026-02-15 03:39:41.107992 | orchestrator |  } 2026-02-15 03:39:41.107999 | orchestrator | } 2026-02-15 03:39:41.108007 | orchestrator | 2026-02-15 03:39:41.108015 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-02-15 03:39:41.108022 | orchestrator | Sunday 15 February 2026 03:39:37 +0000 (0:00:00.162) 0:00:30.508 ******* 2026-02-15 03:39:41.108029 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.108037 | orchestrator | 2026-02-15 03:39:41.108044 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-02-15 03:39:41.108052 | orchestrator | Sunday 15 February 2026 03:39:38 +0000 (0:00:00.161) 0:00:30.669 ******* 2026-02-15 03:39:41.108059 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.108066 | orchestrator | 2026-02-15 03:39:41.108073 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-02-15 03:39:41.108081 | orchestrator | Sunday 15 February 2026 03:39:38 +0000 (0:00:00.140) 0:00:30.810 ******* 2026-02-15 03:39:41.108089 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:39:41.108096 | orchestrator | 2026-02-15 03:39:41.108103 | orchestrator | TASK [Print configuration data] ************************************************ 2026-02-15 03:39:41.108110 | orchestrator | Sunday 15 February 2026 03:39:38 +0000 
(0:00:00.126) 0:00:30.936 ******* 2026-02-15 03:39:41.108118 | orchestrator | changed: [testbed-node-4] => { 2026-02-15 03:39:41.108125 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-02-15 03:39:41.108132 | orchestrator |  "ceph_osd_devices": { 2026-02-15 03:39:41.108140 | orchestrator |  "sdb": { 2026-02-15 03:39:41.108147 | orchestrator |  "osd_lvm_uuid": "85fe8ada-5694-5853-9626-8b4c90604800" 2026-02-15 03:39:41.108155 | orchestrator |  }, 2026-02-15 03:39:41.108162 | orchestrator |  "sdc": { 2026-02-15 03:39:41.108169 | orchestrator |  "osd_lvm_uuid": "12f88160-c11a-5ad6-adc7-3b0cfe47daee" 2026-02-15 03:39:41.108175 | orchestrator |  } 2026-02-15 03:39:41.108182 | orchestrator |  }, 2026-02-15 03:39:41.108188 | orchestrator |  "lvm_volumes": [ 2026-02-15 03:39:41.108195 | orchestrator |  { 2026-02-15 03:39:41.108201 | orchestrator |  "data": "osd-block-85fe8ada-5694-5853-9626-8b4c90604800", 2026-02-15 03:39:41.108207 | orchestrator |  "data_vg": "ceph-85fe8ada-5694-5853-9626-8b4c90604800" 2026-02-15 03:39:41.108213 | orchestrator |  }, 2026-02-15 03:39:41.108220 | orchestrator |  { 2026-02-15 03:39:41.108226 | orchestrator |  "data": "osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee", 2026-02-15 03:39:41.108233 | orchestrator |  "data_vg": "ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee" 2026-02-15 03:39:41.108239 | orchestrator |  } 2026-02-15 03:39:41.108245 | orchestrator |  ] 2026-02-15 03:39:41.108252 | orchestrator |  } 2026-02-15 03:39:41.108258 | orchestrator | } 2026-02-15 03:39:41.108265 | orchestrator | 2026-02-15 03:39:41.108271 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-02-15 03:39:41.108282 | orchestrator | Sunday 15 February 2026 03:39:38 +0000 (0:00:00.476) 0:00:31.413 ******* 2026-02-15 03:39:41.108288 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-15 03:39:41.108294 | orchestrator | 2026-02-15 03:39:41.108301 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-02-15 03:39:41.108307 | orchestrator | 2026-02-15 03:39:41.108313 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-15 03:39:41.108320 | orchestrator | Sunday 15 February 2026 03:39:40 +0000 (0:00:01.270) 0:00:32.683 ******* 2026-02-15 03:39:41.108326 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-15 03:39:41.108332 | orchestrator | 2026-02-15 03:39:41.108339 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-15 03:39:41.108345 | orchestrator | Sunday 15 February 2026 03:39:40 +0000 (0:00:00.270) 0:00:32.954 ******* 2026-02-15 03:39:41.108351 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:39:41.108358 | orchestrator | 2026-02-15 03:39:41.108364 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:41.108370 | orchestrator | Sunday 15 February 2026 03:39:40 +0000 (0:00:00.267) 0:00:33.221 ******* 2026-02-15 03:39:41.108376 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-15 03:39:41.108383 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-15 03:39:41.108389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-15 03:39:41.108395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-15 03:39:41.108402 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-15 03:39:41.108415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-15 03:39:50.417914 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-15 03:39:50.418063 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-15 03:39:50.418078 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-15 03:39:50.418088 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-15 03:39:50.418098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-15 03:39:50.418107 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-15 03:39:50.418116 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-15 03:39:50.418125 | orchestrator | 2026-02-15 03:39:50.418135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418145 | orchestrator | Sunday 15 February 2026 03:39:41 +0000 (0:00:00.405) 0:00:33.627 ******* 2026-02-15 03:39:50.418154 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.418164 | orchestrator | 2026-02-15 03:39:50.418173 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418182 | orchestrator | Sunday 15 February 2026 03:39:41 +0000 (0:00:00.231) 0:00:33.858 ******* 2026-02-15 03:39:50.418191 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.418199 | orchestrator | 2026-02-15 03:39:50.418208 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418217 | orchestrator | Sunday 15 February 2026 03:39:41 +0000 (0:00:00.223) 0:00:34.082 ******* 2026-02-15 03:39:50.418225 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.418234 | orchestrator | 2026-02-15 03:39:50.418243 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418251 | 
orchestrator | Sunday 15 February 2026 03:39:41 +0000 (0:00:00.212) 0:00:34.295 ******* 2026-02-15 03:39:50.418261 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.418292 | orchestrator | 2026-02-15 03:39:50.418301 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418310 | orchestrator | Sunday 15 February 2026 03:39:42 +0000 (0:00:00.704) 0:00:34.999 ******* 2026-02-15 03:39:50.418318 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.418327 | orchestrator | 2026-02-15 03:39:50.418336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418345 | orchestrator | Sunday 15 February 2026 03:39:42 +0000 (0:00:00.237) 0:00:35.237 ******* 2026-02-15 03:39:50.418353 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.418362 | orchestrator | 2026-02-15 03:39:50.418370 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418379 | orchestrator | Sunday 15 February 2026 03:39:42 +0000 (0:00:00.209) 0:00:35.446 ******* 2026-02-15 03:39:50.418388 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.418397 | orchestrator | 2026-02-15 03:39:50.418405 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418414 | orchestrator | Sunday 15 February 2026 03:39:43 +0000 (0:00:00.219) 0:00:35.666 ******* 2026-02-15 03:39:50.418422 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.418431 | orchestrator | 2026-02-15 03:39:50.418441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418451 | orchestrator | Sunday 15 February 2026 03:39:43 +0000 (0:00:00.219) 0:00:35.886 ******* 2026-02-15 03:39:50.418461 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd) 2026-02-15 03:39:50.418472 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd) 2026-02-15 03:39:50.418482 | orchestrator | 2026-02-15 03:39:50.418491 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418501 | orchestrator | Sunday 15 February 2026 03:39:43 +0000 (0:00:00.443) 0:00:36.329 ******* 2026-02-15 03:39:50.418538 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2) 2026-02-15 03:39:50.418548 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2) 2026-02-15 03:39:50.418558 | orchestrator | 2026-02-15 03:39:50.418568 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418578 | orchestrator | Sunday 15 February 2026 03:39:44 +0000 (0:00:00.452) 0:00:36.782 ******* 2026-02-15 03:39:50.418589 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58) 2026-02-15 03:39:50.418599 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58) 2026-02-15 03:39:50.418609 | orchestrator | 2026-02-15 03:39:50.418620 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:39:50.418630 | orchestrator | Sunday 15 February 2026 03:39:44 +0000 (0:00:00.477) 0:00:37.259 ******* 2026-02-15 03:39:50.418639 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f) 2026-02-15 03:39:50.418649 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f) 2026-02-15 03:39:50.418659 | orchestrator | 2026-02-15 03:39:50.418669 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-02-15 03:39:50.418679 | orchestrator | Sunday 15 February 2026 03:39:45 +0000 (0:00:00.468) 0:00:37.728 ******* 2026-02-15 03:39:50.418689 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-15 03:39:50.418698 | orchestrator | 2026-02-15 03:39:50.418721 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.418747 | orchestrator | Sunday 15 February 2026 03:39:45 +0000 (0:00:00.365) 0:00:38.094 ******* 2026-02-15 03:39:50.418758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-15 03:39:50.418775 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-15 03:39:50.418786 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-15 03:39:50.418796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-15 03:39:50.418806 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-15 03:39:50.418815 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-15 03:39:50.418823 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-15 03:39:50.418832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-15 03:39:50.418840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-15 03:39:50.418849 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-15 03:39:50.418858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-02-15 03:39:50.418866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-15 03:39:50.418875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-15 03:39:50.418883 | orchestrator | 2026-02-15 03:39:50.418892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.418901 | orchestrator | Sunday 15 February 2026 03:39:46 +0000 (0:00:00.631) 0:00:38.726 ******* 2026-02-15 03:39:50.418910 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.418919 | orchestrator | 2026-02-15 03:39:50.418928 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.418936 | orchestrator | Sunday 15 February 2026 03:39:46 +0000 (0:00:00.224) 0:00:38.950 ******* 2026-02-15 03:39:50.418945 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.418954 | orchestrator | 2026-02-15 03:39:50.418973 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.418982 | orchestrator | Sunday 15 February 2026 03:39:46 +0000 (0:00:00.231) 0:00:39.182 ******* 2026-02-15 03:39:50.419001 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.419010 | orchestrator | 2026-02-15 03:39:50.419019 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.419027 | orchestrator | Sunday 15 February 2026 03:39:46 +0000 (0:00:00.222) 0:00:39.405 ******* 2026-02-15 03:39:50.419036 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.419045 | orchestrator | 2026-02-15 03:39:50.419053 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.419062 | orchestrator | Sunday 15 February 2026 03:39:47 +0000 (0:00:00.225) 0:00:39.630 ******* 2026-02-15 03:39:50.419071 
| orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.419079 | orchestrator | 2026-02-15 03:39:50.419088 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.419097 | orchestrator | Sunday 15 February 2026 03:39:47 +0000 (0:00:00.240) 0:00:39.871 ******* 2026-02-15 03:39:50.419105 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.419114 | orchestrator | 2026-02-15 03:39:50.419123 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.419131 | orchestrator | Sunday 15 February 2026 03:39:47 +0000 (0:00:00.232) 0:00:40.103 ******* 2026-02-15 03:39:50.419140 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.419149 | orchestrator | 2026-02-15 03:39:50.419158 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.419166 | orchestrator | Sunday 15 February 2026 03:39:47 +0000 (0:00:00.221) 0:00:40.324 ******* 2026-02-15 03:39:50.419175 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.419184 | orchestrator | 2026-02-15 03:39:50.419192 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.419207 | orchestrator | Sunday 15 February 2026 03:39:48 +0000 (0:00:00.224) 0:00:40.549 ******* 2026-02-15 03:39:50.419216 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-15 03:39:50.419225 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-02-15 03:39:50.419234 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-15 03:39:50.419243 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-15 03:39:50.419251 | orchestrator | 2026-02-15 03:39:50.419260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.419269 | orchestrator | Sunday 15 February 2026 03:39:48 +0000 (0:00:00.971) 
0:00:41.521 ******* 2026-02-15 03:39:50.419277 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.419286 | orchestrator | 2026-02-15 03:39:50.419294 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.419303 | orchestrator | Sunday 15 February 2026 03:39:49 +0000 (0:00:00.217) 0:00:41.738 ******* 2026-02-15 03:39:50.419312 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.419321 | orchestrator | 2026-02-15 03:39:50.419329 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.419338 | orchestrator | Sunday 15 February 2026 03:39:49 +0000 (0:00:00.211) 0:00:41.950 ******* 2026-02-15 03:39:50.419347 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.419355 | orchestrator | 2026-02-15 03:39:50.419364 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:39:50.419378 | orchestrator | Sunday 15 February 2026 03:39:50 +0000 (0:00:00.753) 0:00:42.704 ******* 2026-02-15 03:39:50.419387 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:39:50.419395 | orchestrator | 2026-02-15 03:39:50.419410 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-02-15 03:39:54.962651 | orchestrator | Sunday 15 February 2026 03:39:50 +0000 (0:00:00.233) 0:00:42.937 ******* 2026-02-15 03:39:54.962759 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-02-15 03:39:54.962776 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-02-15 03:39:54.962789 | orchestrator | 2026-02-15 03:39:54.962802 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-02-15 03:39:54.962814 | orchestrator | Sunday 15 February 2026 03:39:50 +0000 (0:00:00.235) 0:00:43.173 ******* 2026-02-15 03:39:54.962825 | orchestrator | skipping: 
[testbed-node-5]
2026-02-15 03:39:54.962837 | orchestrator |
2026-02-15 03:39:54.962849 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-02-15 03:39:54.962860 | orchestrator | Sunday 15 February 2026 03:39:50 +0000 (0:00:00.157) 0:00:43.331 *******
2026-02-15 03:39:54.962871 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:39:54.962882 | orchestrator |
2026-02-15 03:39:54.962894 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-02-15 03:39:54.962905 | orchestrator | Sunday 15 February 2026 03:39:50 +0000 (0:00:00.143) 0:00:43.474 *******
2026-02-15 03:39:54.962916 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:39:54.962927 | orchestrator |
2026-02-15 03:39:54.962937 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-02-15 03:39:54.962949 | orchestrator | Sunday 15 February 2026 03:39:51 +0000 (0:00:00.141) 0:00:43.615 *******
2026-02-15 03:39:54.962960 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:39:54.962971 | orchestrator |
2026-02-15 03:39:54.962982 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-02-15 03:39:54.962993 | orchestrator | Sunday 15 February 2026 03:39:51 +0000 (0:00:00.169) 0:00:43.785 *******
2026-02-15 03:39:54.963005 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '37190823-1b54-548e-8f85-c0a5c63b57f9'}})
2026-02-15 03:39:54.963016 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fe68aa92-7c5f-5213-9184-27150181e978'}})
2026-02-15 03:39:54.963027 | orchestrator |
2026-02-15 03:39:54.963039 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-02-15 03:39:54.963077 | orchestrator | Sunday 15 February 2026 03:39:51 +0000 (0:00:00.192) 0:00:43.978 *******
2026-02-15 03:39:54.963089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '37190823-1b54-548e-8f85-c0a5c63b57f9'}})
2026-02-15 03:39:54.963104 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fe68aa92-7c5f-5213-9184-27150181e978'}})
2026-02-15 03:39:54.963116 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:39:54.963129 | orchestrator |
2026-02-15 03:39:54.963142 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-02-15 03:39:54.963155 | orchestrator | Sunday 15 February 2026 03:39:51 +0000 (0:00:00.164) 0:00:44.142 *******
2026-02-15 03:39:54.963167 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '37190823-1b54-548e-8f85-c0a5c63b57f9'}})
2026-02-15 03:39:54.963183 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fe68aa92-7c5f-5213-9184-27150181e978'}})
2026-02-15 03:39:54.963202 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:39:54.963221 | orchestrator |
2026-02-15 03:39:54.963240 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-02-15 03:39:54.963259 | orchestrator | Sunday 15 February 2026 03:39:51 +0000 (0:00:00.159) 0:00:44.301 *******
2026-02-15 03:39:54.963277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '37190823-1b54-548e-8f85-c0a5c63b57f9'}})
2026-02-15 03:39:54.963296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fe68aa92-7c5f-5213-9184-27150181e978'}})
2026-02-15 03:39:54.963316 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:39:54.963336 | orchestrator |
2026-02-15 03:39:54.963355 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-02-15 03:39:54.963373 | orchestrator | Sunday 15 February 2026 03:39:51 +0000 (0:00:00.159) 0:00:44.461 *******
2026-02-15 03:39:54.963392 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:39:54.963410 | orchestrator |
2026-02-15 03:39:54.963428 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-02-15 03:39:54.963447 | orchestrator | Sunday 15 February 2026 03:39:52 +0000 (0:00:00.159) 0:00:44.620 *******
2026-02-15 03:39:54.963466 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:39:54.963486 | orchestrator |
2026-02-15 03:39:54.963497 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-02-15 03:39:54.963538 | orchestrator | Sunday 15 February 2026 03:39:52 +0000 (0:00:00.396) 0:00:45.017 *******
2026-02-15 03:39:54.963557 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:39:54.963568 | orchestrator |
2026-02-15 03:39:54.963579 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-02-15 03:39:54.963591 | orchestrator | Sunday 15 February 2026 03:39:52 +0000 (0:00:00.152) 0:00:45.169 *******
2026-02-15 03:39:54.963602 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:39:54.963612 | orchestrator |
2026-02-15 03:39:54.963623 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-02-15 03:39:54.963634 | orchestrator | Sunday 15 February 2026 03:39:52 +0000 (0:00:00.153) 0:00:45.323 *******
2026-02-15 03:39:54.963645 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:39:54.963656 | orchestrator |
2026-02-15 03:39:54.963667 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-02-15 03:39:54.963694 | orchestrator | Sunday 15 February 2026 03:39:52 +0000 (0:00:00.151) 0:00:45.474 *******
2026-02-15 03:39:54.963706 | orchestrator | ok: [testbed-node-5] => {
2026-02-15 03:39:54.963718 | orchestrator |     "ceph_osd_devices": {
2026-02-15 03:39:54.963729 | orchestrator |         "sdb": {
2026-02-15 03:39:54.963763 | orchestrator |             "osd_lvm_uuid": "37190823-1b54-548e-8f85-c0a5c63b57f9"
2026-02-15 03:39:54.963775 | orchestrator |         },
2026-02-15 03:39:54.963787 | orchestrator |         "sdc": {
2026-02-15 03:39:54.963809 | orchestrator |             "osd_lvm_uuid": "fe68aa92-7c5f-5213-9184-27150181e978"
2026-02-15 03:39:54.963821 | orchestrator |         }
2026-02-15 03:39:54.963832 | orchestrator |     }
2026-02-15 03:39:54.963843 | orchestrator | }
2026-02-15 03:39:54.963854 | orchestrator |
2026-02-15 03:39:54.963865 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-02-15 03:39:54.963877 | orchestrator | Sunday 15 February 2026 03:39:53 +0000 (0:00:00.165) 0:00:45.639 *******
2026-02-15 03:39:54.963887 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:39:54.963898 | orchestrator |
2026-02-15 03:39:54.963910 | orchestrator | TASK [Print DB devices] ********************************************************
2026-02-15 03:39:54.963920 | orchestrator | Sunday 15 February 2026 03:39:53 +0000 (0:00:00.152) 0:00:45.791 *******
2026-02-15 03:39:54.963931 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:39:54.963942 | orchestrator |
2026-02-15 03:39:54.963953 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-02-15 03:39:54.963964 | orchestrator | Sunday 15 February 2026 03:39:53 +0000 (0:00:00.148) 0:00:45.940 *******
2026-02-15 03:39:54.963975 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:39:54.963986 | orchestrator |
2026-02-15 03:39:54.963997 | orchestrator | TASK [Print configuration data] ************************************************
2026-02-15 03:39:54.964008 | orchestrator | Sunday 15 February 2026 03:39:53 +0000 (0:00:00.159) 0:00:46.100 *******
2026-02-15 03:39:54.964019 | orchestrator | changed: [testbed-node-5] => {
2026-02-15 03:39:54.964030 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-02-15 03:39:54.964041 | orchestrator |         "ceph_osd_devices": {
2026-02-15 03:39:54.964052 | orchestrator |             "sdb": {
2026-02-15 03:39:54.964063 | orchestrator |                 "osd_lvm_uuid": "37190823-1b54-548e-8f85-c0a5c63b57f9"
2026-02-15 03:39:54.964074 | orchestrator |             },
2026-02-15 03:39:54.964084 | orchestrator |             "sdc": {
2026-02-15 03:39:54.964095 | orchestrator |                 "osd_lvm_uuid": "fe68aa92-7c5f-5213-9184-27150181e978"
2026-02-15 03:39:54.964106 | orchestrator |             }
2026-02-15 03:39:54.964117 | orchestrator |         },
2026-02-15 03:39:54.964128 | orchestrator |         "lvm_volumes": [
2026-02-15 03:39:54.964139 | orchestrator |             {
2026-02-15 03:39:54.964150 | orchestrator |                 "data": "osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9",
2026-02-15 03:39:54.964161 | orchestrator |                 "data_vg": "ceph-37190823-1b54-548e-8f85-c0a5c63b57f9"
2026-02-15 03:39:54.964172 | orchestrator |             },
2026-02-15 03:39:54.964183 | orchestrator |             {
2026-02-15 03:39:54.964194 | orchestrator |                 "data": "osd-block-fe68aa92-7c5f-5213-9184-27150181e978",
2026-02-15 03:39:54.964205 | orchestrator |                 "data_vg": "ceph-fe68aa92-7c5f-5213-9184-27150181e978"
2026-02-15 03:39:54.964216 | orchestrator |             }
2026-02-15 03:39:54.964227 | orchestrator |         ]
2026-02-15 03:39:54.964238 | orchestrator |     }
2026-02-15 03:39:54.964249 | orchestrator | }
2026-02-15 03:39:54.964260 | orchestrator |
2026-02-15 03:39:54.964271 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-02-15 03:39:54.964282 | orchestrator | Sunday 15 February 2026 03:39:53 +0000 (0:00:00.242) 0:00:46.342 *******
2026-02-15 03:39:54.964294 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-02-15 03:39:54.964313 | orchestrator |
2026-02-15 03:39:54.964333 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:39:54.964351 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-15 03:39:54.964371 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-15 03:39:54.964390 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-02-15 03:39:54.964422 | orchestrator |
2026-02-15 03:39:54.964442 | orchestrator |
2026-02-15 03:39:54.964461 | orchestrator |
2026-02-15 03:39:54.964481 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:39:54.964499 | orchestrator | Sunday 15 February 2026 03:39:54 +0000 (0:00:01.122) 0:00:47.465 *******
2026-02-15 03:39:54.964546 | orchestrator | ===============================================================================
2026-02-15 03:39:54.964565 | orchestrator | Write configuration file ------------------------------------------------ 4.39s
2026-02-15 03:39:54.964584 | orchestrator | Add known partitions to the list of available block devices ------------- 1.97s
2026-02-15 03:39:54.964602 | orchestrator | Add known links to the list of available block devices ------------------ 1.39s
2026-02-15 03:39:54.964621 | orchestrator | Add known partitions to the list of available block devices ------------- 1.22s
2026-02-15 03:39:54.964639 | orchestrator | Print configuration data ------------------------------------------------ 1.16s
2026-02-15 03:39:54.964659 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s
2026-02-15 03:39:54.964677 | orchestrator | Add known partitions to the list of available block devices ------------- 0.97s
2026-02-15 03:39:54.964696 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s
2026-02-15 03:39:54.964708 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.83s
2026-02-15 03:39:54.964718 | orchestrator | Get initial list of available block devices ----------------------------- 0.81s
2026-02-15 03:39:54.964729 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.78s
2026-02-15 03:39:54.964749 | orchestrator | Add known partitions to the list of available block devices ------------- 0.75s
2026-02-15 03:39:54.964760 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s
2026-02-15 03:39:54.964783 | orchestrator | Generate lvm_volumes structure (block + db) ----------------------------- 0.73s
2026-02-15 03:39:55.442137 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s
2026-02-15 03:39:55.442255 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2026-02-15 03:39:55.442280 | orchestrator | Set OSD devices config data --------------------------------------------- 0.71s
2026-02-15 03:39:55.442300 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s
2026-02-15 03:39:55.442320 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2026-02-15 03:39:55.442339 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2026-02-15 03:40:18.230805 | orchestrator | 2026-02-15 03:40:18 | INFO  | Task 067da9e7-670c-48d3-ac15-49f2f6eeba29 (sync inventory) is running in background. Output coming soon.
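The configuration data printed by the play above derives each `lvm_volumes` entry directly from `ceph_osd_devices`: the LV name is `osd-block-<osd_lvm_uuid>` and the VG name is `ceph-<osd_lvm_uuid>`. A minimal Python sketch of that mapping (the function name is illustrative, not taken from the playbook):

```python
def build_lvm_volumes(ceph_osd_devices):
    """Derive lvm_volumes entries from a ceph_osd_devices dict,
    mirroring the naming visible in the task output:
      data    = "osd-block-<osd_lvm_uuid>"
      data_vg = "ceph-<osd_lvm_uuid>"
    """
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        # iterate devices in a stable order (sdb, sdc, ...)
        for device, spec in sorted(ceph_osd_devices.items())
    ]

# Values taken from the testbed-node-5 output above.
devices = {
    "sdb": {"osd_lvm_uuid": "37190823-1b54-548e-8f85-c0a5c63b57f9"},
    "sdc": {"osd_lvm_uuid": "fe68aa92-7c5f-5213-9184-27150181e978"},
}
lvm_volumes = build_lvm_volumes(devices)
```

Applied to the two devices above, this reproduces the `lvm_volumes` list shown in the "Print configuration data" task.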
2026-02-15 03:40:50.273456 | orchestrator | 2026-02-15 03:40:19 | INFO  | Starting group_vars file reorganization
2026-02-15 03:40:50.273634 | orchestrator | 2026-02-15 03:40:19 | INFO  | Moved 0 file(s) to their respective directories
2026-02-15 03:40:50.273653 | orchestrator | 2026-02-15 03:40:19 | INFO  | Group_vars file reorganization completed
2026-02-15 03:40:50.273664 | orchestrator | 2026-02-15 03:40:23 | INFO  | Starting variable preparation from inventory
2026-02-15 03:40:50.273675 | orchestrator | 2026-02-15 03:40:26 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-02-15 03:40:50.273685 | orchestrator | 2026-02-15 03:40:26 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-02-15 03:40:50.273695 | orchestrator | 2026-02-15 03:40:26 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-02-15 03:40:50.273705 | orchestrator | 2026-02-15 03:40:26 | INFO  | 3 file(s) written, 6 host(s) processed
2026-02-15 03:40:50.273716 | orchestrator | 2026-02-15 03:40:26 | INFO  | Variable preparation completed
2026-02-15 03:40:50.273725 | orchestrator | 2026-02-15 03:40:28 | INFO  | Starting inventory overwrite handling
2026-02-15 03:40:50.273770 | orchestrator | 2026-02-15 03:40:28 | INFO  | Handling group overwrites in 99-overwrite
2026-02-15 03:40:50.273796 | orchestrator | 2026-02-15 03:40:28 | INFO  | Removing group frr:children from 60-generic
2026-02-15 03:40:50.273815 | orchestrator | 2026-02-15 03:40:28 | INFO  | Removing group netbird:children from 50-infrastructure
2026-02-15 03:40:50.273831 | orchestrator | 2026-02-15 03:40:28 | INFO  | Removing group ceph-rgw from 50-ceph
2026-02-15 03:40:50.273847 | orchestrator | 2026-02-15 03:40:28 | INFO  | Removing group ceph-mds from 50-ceph
2026-02-15 03:40:50.273862 | orchestrator | 2026-02-15 03:40:28 | INFO  | Handling group overwrites in 20-roles
2026-02-15 03:40:50.273876 | orchestrator | 2026-02-15 03:40:28 | INFO  | Removing group k3s_node from 50-infrastructure
2026-02-15 03:40:50.273890 | orchestrator | 2026-02-15 03:40:28 | INFO  | Removed 5 group(s) in total
2026-02-15 03:40:50.273905 | orchestrator | 2026-02-15 03:40:28 | INFO  | Inventory overwrite handling completed
2026-02-15 03:40:50.273921 | orchestrator | 2026-02-15 03:40:29 | INFO  | Starting merge of inventory files
2026-02-15 03:40:50.273936 | orchestrator | 2026-02-15 03:40:29 | INFO  | Inventory files merged successfully
2026-02-15 03:40:50.273952 | orchestrator | 2026-02-15 03:40:35 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-02-15 03:40:50.273971 | orchestrator | 2026-02-15 03:40:48 | INFO  | Successfully wrote ClusterShell configuration
2026-02-15 03:40:50.273990 | orchestrator | [master 4cbe3c5] 2026-02-15-03-40
2026-02-15 03:40:50.274008 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-02-15 03:40:52.836156 | orchestrator | 2026-02-15 03:40:52 | INFO  | Task b474bed4-e5ac-4786-8444-81bbf25c2064 (ceph-create-lvm-devices) was prepared for execution.
2026-02-15 03:40:52.836275 | orchestrator | 2026-02-15 03:40:52 | INFO  | It takes a moment until task b474bed4-e5ac-4786-8444-81bbf25c2064 (ceph-create-lvm-devices) has been started and output is visible here.
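The sync-inventory step above reports generating a ClusterShell configuration from the Ansible inventory. ClusterShell's flat groups file maps one group per line to a nodeset (`group: host1,host2`); a hypothetical sketch of such a conversion (the function and the sample inventory are assumptions for illustration, not the actual OSISM tooling):

```python
def ansible_groups_to_clustershell(groups):
    """Render an Ansible-style {group: [hosts]} mapping as a
    ClusterShell flat groups file, one 'group: host1,host2' line
    per group, in sorted order for stable output."""
    lines = [
        f"{group}: {','.join(sorted(hosts))}"
        for group, hosts in sorted(groups.items())
    ]
    return "\n".join(lines) + "\n"

# Hypothetical subset of the testbed inventory.
inventory = {
    "ceph-osd": ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
    "manager": ["testbed-manager"],
}
config = ansible_groups_to_clustershell(inventory)
```

With a file like this in place, `clush -g ceph-osd <cmd>` can fan a command out to the same hosts Ansible targets.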
2026-02-15 03:41:05.803213 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-15 03:41:05.803363 | orchestrator | 2.16.14 2026-02-15 03:41:05.803380 | orchestrator | 2026-02-15 03:41:05.803392 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-15 03:41:05.803402 | orchestrator | 2026-02-15 03:41:05.803454 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-15 03:41:05.803465 | orchestrator | Sunday 15 February 2026 03:40:57 +0000 (0:00:00.327) 0:00:00.327 ******* 2026-02-15 03:41:05.803476 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-15 03:41:05.803485 | orchestrator | 2026-02-15 03:41:05.803494 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-15 03:41:05.803549 | orchestrator | Sunday 15 February 2026 03:40:57 +0000 (0:00:00.283) 0:00:00.611 ******* 2026-02-15 03:41:05.803561 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:41:05.803571 | orchestrator | 2026-02-15 03:41:05.803580 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.803589 | orchestrator | Sunday 15 February 2026 03:40:58 +0000 (0:00:00.245) 0:00:00.856 ******* 2026-02-15 03:41:05.803598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-02-15 03:41:05.803606 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-02-15 03:41:05.803615 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-02-15 03:41:05.803624 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-02-15 03:41:05.803633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-02-15 
03:41:05.803641 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-02-15 03:41:05.803670 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-02-15 03:41:05.803679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-02-15 03:41:05.803688 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-02-15 03:41:05.803697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-02-15 03:41:05.803706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-02-15 03:41:05.803715 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-02-15 03:41:05.803724 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-02-15 03:41:05.803733 | orchestrator | 2026-02-15 03:41:05.803742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.803752 | orchestrator | Sunday 15 February 2026 03:40:58 +0000 (0:00:00.531) 0:00:01.388 ******* 2026-02-15 03:41:05.803763 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.803773 | orchestrator | 2026-02-15 03:41:05.803784 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.803794 | orchestrator | Sunday 15 February 2026 03:40:58 +0000 (0:00:00.227) 0:00:01.615 ******* 2026-02-15 03:41:05.803805 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.803815 | orchestrator | 2026-02-15 03:41:05.803826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.803836 | orchestrator | Sunday 15 February 2026 03:40:59 +0000 (0:00:00.213) 0:00:01.829 ******* 2026-02-15 
03:41:05.803846 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.803856 | orchestrator | 2026-02-15 03:41:05.803867 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.803877 | orchestrator | Sunday 15 February 2026 03:40:59 +0000 (0:00:00.228) 0:00:02.058 ******* 2026-02-15 03:41:05.803888 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.803899 | orchestrator | 2026-02-15 03:41:05.803910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.803919 | orchestrator | Sunday 15 February 2026 03:40:59 +0000 (0:00:00.230) 0:00:02.289 ******* 2026-02-15 03:41:05.803928 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.803937 | orchestrator | 2026-02-15 03:41:05.803945 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.803954 | orchestrator | Sunday 15 February 2026 03:40:59 +0000 (0:00:00.224) 0:00:02.513 ******* 2026-02-15 03:41:05.803963 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.803971 | orchestrator | 2026-02-15 03:41:05.803980 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.803989 | orchestrator | Sunday 15 February 2026 03:40:59 +0000 (0:00:00.213) 0:00:02.727 ******* 2026-02-15 03:41:05.803999 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.804014 | orchestrator | 2026-02-15 03:41:05.804028 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.804043 | orchestrator | Sunday 15 February 2026 03:41:00 +0000 (0:00:00.223) 0:00:02.950 ******* 2026-02-15 03:41:05.804058 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.804072 | orchestrator | 2026-02-15 03:41:05.804085 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-02-15 03:41:05.804094 | orchestrator | Sunday 15 February 2026 03:41:00 +0000 (0:00:00.231) 0:00:03.182 ******* 2026-02-15 03:41:05.804103 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45) 2026-02-15 03:41:05.804112 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45) 2026-02-15 03:41:05.804121 | orchestrator | 2026-02-15 03:41:05.804130 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.804163 | orchestrator | Sunday 15 February 2026 03:41:00 +0000 (0:00:00.500) 0:00:03.683 ******* 2026-02-15 03:41:05.804173 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090) 2026-02-15 03:41:05.804182 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090) 2026-02-15 03:41:05.804191 | orchestrator | 2026-02-15 03:41:05.804200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.804209 | orchestrator | Sunday 15 February 2026 03:41:01 +0000 (0:00:00.740) 0:00:04.423 ******* 2026-02-15 03:41:05.804217 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71) 2026-02-15 03:41:05.804231 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71) 2026-02-15 03:41:05.804241 | orchestrator | 2026-02-15 03:41:05.804249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.804258 | orchestrator | Sunday 15 February 2026 03:41:02 +0000 (0:00:00.789) 0:00:05.213 ******* 2026-02-15 03:41:05.804267 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e) 2026-02-15 03:41:05.804276 | orchestrator | 
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e) 2026-02-15 03:41:05.804285 | orchestrator | 2026-02-15 03:41:05.804294 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:05.804306 | orchestrator | Sunday 15 February 2026 03:41:03 +0000 (0:00:00.945) 0:00:06.158 ******* 2026-02-15 03:41:05.804320 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-15 03:41:05.804336 | orchestrator | 2026-02-15 03:41:05.804350 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:41:05.804366 | orchestrator | Sunday 15 February 2026 03:41:03 +0000 (0:00:00.396) 0:00:06.555 ******* 2026-02-15 03:41:05.804383 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-02-15 03:41:05.804397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-02-15 03:41:05.804409 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-02-15 03:41:05.804424 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-02-15 03:41:05.804439 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-02-15 03:41:05.804453 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-02-15 03:41:05.804468 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-02-15 03:41:05.804479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2026-02-15 03:41:05.804488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-02-15 03:41:05.804497 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-02-15 03:41:05.804584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-02-15 03:41:05.804597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-02-15 03:41:05.804606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-02-15 03:41:05.804614 | orchestrator | 2026-02-15 03:41:05.804624 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:41:05.804632 | orchestrator | Sunday 15 February 2026 03:41:04 +0000 (0:00:00.455) 0:00:07.010 ******* 2026-02-15 03:41:05.804641 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.804650 | orchestrator | 2026-02-15 03:41:05.804659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:41:05.804676 | orchestrator | Sunday 15 February 2026 03:41:04 +0000 (0:00:00.217) 0:00:07.228 ******* 2026-02-15 03:41:05.804685 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.804694 | orchestrator | 2026-02-15 03:41:05.804703 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:41:05.804711 | orchestrator | Sunday 15 February 2026 03:41:04 +0000 (0:00:00.245) 0:00:07.473 ******* 2026-02-15 03:41:05.804720 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.804729 | orchestrator | 2026-02-15 03:41:05.804738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:41:05.804746 | orchestrator | Sunday 15 February 2026 03:41:04 +0000 (0:00:00.251) 0:00:07.725 ******* 2026-02-15 03:41:05.804756 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.804771 | orchestrator | 2026-02-15 03:41:05.804786 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-02-15 03:41:05.804799 | orchestrator | Sunday 15 February 2026 03:41:05 +0000 (0:00:00.204) 0:00:07.930 ******* 2026-02-15 03:41:05.804807 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.804816 | orchestrator | 2026-02-15 03:41:05.804825 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:41:05.804833 | orchestrator | Sunday 15 February 2026 03:41:05 +0000 (0:00:00.226) 0:00:08.156 ******* 2026-02-15 03:41:05.804842 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.804851 | orchestrator | 2026-02-15 03:41:05.804859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:41:05.804868 | orchestrator | Sunday 15 February 2026 03:41:05 +0000 (0:00:00.223) 0:00:08.379 ******* 2026-02-15 03:41:05.804877 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:05.804886 | orchestrator | 2026-02-15 03:41:05.804902 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:41:14.381148 | orchestrator | Sunday 15 February 2026 03:41:05 +0000 (0:00:00.206) 0:00:08.586 ******* 2026-02-15 03:41:14.381263 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:14.381281 | orchestrator | 2026-02-15 03:41:14.381294 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:41:14.381306 | orchestrator | Sunday 15 February 2026 03:41:06 +0000 (0:00:00.689) 0:00:09.276 ******* 2026-02-15 03:41:14.381317 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-02-15 03:41:14.381329 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-02-15 03:41:14.381340 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2026-02-15 03:41:14.381351 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2026-02-15 03:41:14.381362 | orchestrator | 2026-02-15 
03:41:14.381390 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:14.381402 | orchestrator | Sunday 15 February 2026 03:41:07 +0000 (0:00:00.735) 0:00:10.012 *******
2026-02-15 03:41:14.381413 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.381424 | orchestrator |
2026-02-15 03:41:14.381436 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:14.381447 | orchestrator | Sunday 15 February 2026 03:41:07 +0000 (0:00:00.221) 0:00:10.233 *******
2026-02-15 03:41:14.381458 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.381469 | orchestrator |
2026-02-15 03:41:14.381480 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:14.381492 | orchestrator | Sunday 15 February 2026 03:41:07 +0000 (0:00:00.213) 0:00:10.446 *******
2026-02-15 03:41:14.381503 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.381547 | orchestrator |
2026-02-15 03:41:14.381558 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:14.381570 | orchestrator | Sunday 15 February 2026 03:41:07 +0000 (0:00:00.207) 0:00:10.654 *******
2026-02-15 03:41:14.381581 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.381592 | orchestrator |
2026-02-15 03:41:14.381603 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-15 03:41:14.381638 | orchestrator | Sunday 15 February 2026 03:41:08 +0000 (0:00:00.211) 0:00:10.865 *******
2026-02-15 03:41:14.381650 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.381662 | orchestrator |
2026-02-15 03:41:14.381674 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-15 03:41:14.381688 | orchestrator | Sunday 15 February 2026 03:41:08 +0000 (0:00:00.147) 0:00:11.013 *******
2026-02-15 03:41:14.381701 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '11907033-e329-56e1-bf1e-182edc1a3769'}})
2026-02-15 03:41:14.381715 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '308eeb04-119e-5b1b-acdb-31959eb9ce55'}})
2026-02-15 03:41:14.381727 | orchestrator |
2026-02-15 03:41:14.381740 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-15 03:41:14.381752 | orchestrator | Sunday 15 February 2026 03:41:08 +0000 (0:00:00.190) 0:00:11.203 *******
2026-02-15 03:41:14.381766 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})
2026-02-15 03:41:14.381781 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})
2026-02-15 03:41:14.381794 | orchestrator |
2026-02-15 03:41:14.381806 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-15 03:41:14.381819 | orchestrator | Sunday 15 February 2026 03:41:10 +0000 (0:00:02.037) 0:00:13.240 *******
2026-02-15 03:41:14.381831 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})
2026-02-15 03:41:14.381845 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})
2026-02-15 03:41:14.381858 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.381870 | orchestrator |
2026-02-15 03:41:14.381882 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-15 03:41:14.381895 | orchestrator | Sunday 15 February 2026 03:41:10 +0000 (0:00:00.177) 0:00:13.418 *******
2026-02-15 03:41:14.381913 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})
2026-02-15 03:41:14.381932 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})
2026-02-15 03:41:14.381951 | orchestrator |
2026-02-15 03:41:14.381969 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-15 03:41:14.381988 | orchestrator | Sunday 15 February 2026 03:41:12 +0000 (0:00:01.525) 0:00:14.944 *******
2026-02-15 03:41:14.382005 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})
2026-02-15 03:41:14.382135 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})
2026-02-15 03:41:14.382159 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.382177 | orchestrator |
2026-02-15 03:41:14.382196 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-15 03:41:14.382215 | orchestrator | Sunday 15 February 2026 03:41:12 +0000 (0:00:00.150) 0:00:15.094 *******
2026-02-15 03:41:14.382259 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.382281 | orchestrator |
2026-02-15 03:41:14.382300 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-15 03:41:14.382318 | orchestrator | Sunday 15 February 2026 03:41:12 +0000 (0:00:00.376) 0:00:15.471 *******
2026-02-15 03:41:14.382337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})
2026-02-15 03:41:14.382380 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})
2026-02-15 03:41:14.382401 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.382420 | orchestrator |
2026-02-15 03:41:14.382438 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-15 03:41:14.382456 | orchestrator | Sunday 15 February 2026 03:41:12 +0000 (0:00:00.184) 0:00:15.656 *******
2026-02-15 03:41:14.382476 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.382494 | orchestrator |
2026-02-15 03:41:14.382633 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-15 03:41:14.382656 | orchestrator | Sunday 15 February 2026 03:41:13 +0000 (0:00:00.152) 0:00:15.808 *******
2026-02-15 03:41:14.382669 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})
2026-02-15 03:41:14.382680 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})
2026-02-15 03:41:14.382691 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.382702 | orchestrator |
2026-02-15 03:41:14.382713 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-15 03:41:14.382724 | orchestrator | Sunday 15 February 2026 03:41:13 +0000 (0:00:00.176) 0:00:15.985 *******
2026-02-15 03:41:14.382735 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:41:14.382745 | orchestrator |
2026-02-15 03:41:14.382756 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-15 03:41:14.382767 | orchestrator | Sunday
15 February 2026 03:41:13 +0000 (0:00:00.162) 0:00:16.148 ******* 2026-02-15 03:41:14.382778 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:14.382789 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:14.382800 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:14.382811 | orchestrator | 2026-02-15 03:41:14.382822 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-15 03:41:14.382833 | orchestrator | Sunday 15 February 2026 03:41:13 +0000 (0:00:00.166) 0:00:16.314 ******* 2026-02-15 03:41:14.382844 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:41:14.382856 | orchestrator | 2026-02-15 03:41:14.382867 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-15 03:41:14.382878 | orchestrator | Sunday 15 February 2026 03:41:13 +0000 (0:00:00.166) 0:00:16.481 ******* 2026-02-15 03:41:14.382888 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:14.382899 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:14.382910 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:14.382921 | orchestrator | 2026-02-15 03:41:14.382932 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-02-15 03:41:14.382943 | orchestrator | Sunday 15 February 2026 03:41:13 +0000 (0:00:00.170) 0:00:16.652 ******* 2026-02-15 03:41:14.382954 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:14.382965 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:14.382976 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:14.382987 | orchestrator | 2026-02-15 03:41:14.383009 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-15 03:41:14.383020 | orchestrator | Sunday 15 February 2026 03:41:14 +0000 (0:00:00.178) 0:00:16.830 ******* 2026-02-15 03:41:14.383031 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:14.383042 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:14.383052 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:14.383062 | orchestrator | 2026-02-15 03:41:14.383071 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-15 03:41:14.383081 | orchestrator | Sunday 15 February 2026 03:41:14 +0000 (0:00:00.172) 0:00:17.003 ******* 2026-02-15 03:41:14.383091 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:14.383100 | orchestrator | 2026-02-15 03:41:14.383110 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-15 03:41:14.383131 | orchestrator | Sunday 15 February 2026 03:41:14 +0000 (0:00:00.163) 0:00:17.167 ******* 2026-02-15 03:41:21.434298 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.434389 | orchestrator | 2026-02-15 03:41:21.434404 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2026-02-15 03:41:21.434415 | orchestrator | Sunday 15 February 2026 03:41:14 +0000 (0:00:00.149) 0:00:17.317 ******* 2026-02-15 03:41:21.434427 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.434440 | orchestrator | 2026-02-15 03:41:21.434454 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-15 03:41:21.434467 | orchestrator | Sunday 15 February 2026 03:41:14 +0000 (0:00:00.382) 0:00:17.699 ******* 2026-02-15 03:41:21.434498 | orchestrator | ok: [testbed-node-3] => { 2026-02-15 03:41:21.434569 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-15 03:41:21.434583 | orchestrator | } 2026-02-15 03:41:21.434597 | orchestrator | 2026-02-15 03:41:21.434611 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-15 03:41:21.434625 | orchestrator | Sunday 15 February 2026 03:41:15 +0000 (0:00:00.151) 0:00:17.851 ******* 2026-02-15 03:41:21.434640 | orchestrator | ok: [testbed-node-3] => { 2026-02-15 03:41:21.434655 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-15 03:41:21.434669 | orchestrator | } 2026-02-15 03:41:21.434684 | orchestrator | 2026-02-15 03:41:21.434698 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-15 03:41:21.434713 | orchestrator | Sunday 15 February 2026 03:41:15 +0000 (0:00:00.164) 0:00:18.016 ******* 2026-02-15 03:41:21.434728 | orchestrator | ok: [testbed-node-3] => { 2026-02-15 03:41:21.434740 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-15 03:41:21.434756 | orchestrator | } 2026-02-15 03:41:21.434805 | orchestrator | 2026-02-15 03:41:21.434823 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-02-15 03:41:21.434839 | orchestrator | Sunday 15 February 2026 03:41:15 +0000 (0:00:00.164) 0:00:18.180 ******* 2026-02-15 03:41:21.434854 | orchestrator | ok: 
[testbed-node-3] 2026-02-15 03:41:21.434870 | orchestrator | 2026-02-15 03:41:21.434885 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-15 03:41:21.434899 | orchestrator | Sunday 15 February 2026 03:41:16 +0000 (0:00:00.687) 0:00:18.868 ******* 2026-02-15 03:41:21.434911 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:41:21.434921 | orchestrator | 2026-02-15 03:41:21.434930 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-15 03:41:21.434939 | orchestrator | Sunday 15 February 2026 03:41:16 +0000 (0:00:00.527) 0:00:19.395 ******* 2026-02-15 03:41:21.434948 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:41:21.434956 | orchestrator | 2026-02-15 03:41:21.434965 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-15 03:41:21.434974 | orchestrator | Sunday 15 February 2026 03:41:17 +0000 (0:00:00.526) 0:00:19.922 ******* 2026-02-15 03:41:21.435005 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:41:21.435014 | orchestrator | 2026-02-15 03:41:21.435023 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-15 03:41:21.435032 | orchestrator | Sunday 15 February 2026 03:41:17 +0000 (0:00:00.157) 0:00:20.080 ******* 2026-02-15 03:41:21.435041 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435049 | orchestrator | 2026-02-15 03:41:21.435058 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-15 03:41:21.435067 | orchestrator | Sunday 15 February 2026 03:41:17 +0000 (0:00:00.131) 0:00:20.212 ******* 2026-02-15 03:41:21.435075 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435084 | orchestrator | 2026-02-15 03:41:21.435093 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-15 03:41:21.435101 | orchestrator | 
Sunday 15 February 2026 03:41:17 +0000 (0:00:00.138) 0:00:20.350 ******* 2026-02-15 03:41:21.435110 | orchestrator | ok: [testbed-node-3] => { 2026-02-15 03:41:21.435119 | orchestrator |  "vgs_report": { 2026-02-15 03:41:21.435129 | orchestrator |  "vg": [] 2026-02-15 03:41:21.435138 | orchestrator |  } 2026-02-15 03:41:21.435147 | orchestrator | } 2026-02-15 03:41:21.435156 | orchestrator | 2026-02-15 03:41:21.435165 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-15 03:41:21.435174 | orchestrator | Sunday 15 February 2026 03:41:17 +0000 (0:00:00.155) 0:00:20.506 ******* 2026-02-15 03:41:21.435183 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435192 | orchestrator | 2026-02-15 03:41:21.435200 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-15 03:41:21.435209 | orchestrator | Sunday 15 February 2026 03:41:17 +0000 (0:00:00.155) 0:00:20.662 ******* 2026-02-15 03:41:21.435218 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435227 | orchestrator | 2026-02-15 03:41:21.435236 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-15 03:41:21.435244 | orchestrator | Sunday 15 February 2026 03:41:18 +0000 (0:00:00.382) 0:00:21.044 ******* 2026-02-15 03:41:21.435253 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435262 | orchestrator | 2026-02-15 03:41:21.435271 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-15 03:41:21.435280 | orchestrator | Sunday 15 February 2026 03:41:18 +0000 (0:00:00.149) 0:00:21.194 ******* 2026-02-15 03:41:21.435288 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435299 | orchestrator | 2026-02-15 03:41:21.435314 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-15 03:41:21.435332 | orchestrator | Sunday 
15 February 2026 03:41:18 +0000 (0:00:00.147) 0:00:21.341 ******* 2026-02-15 03:41:21.435352 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435365 | orchestrator | 2026-02-15 03:41:21.435379 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-15 03:41:21.435392 | orchestrator | Sunday 15 February 2026 03:41:18 +0000 (0:00:00.167) 0:00:21.509 ******* 2026-02-15 03:41:21.435406 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435419 | orchestrator | 2026-02-15 03:41:21.435433 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-15 03:41:21.435447 | orchestrator | Sunday 15 February 2026 03:41:18 +0000 (0:00:00.169) 0:00:21.678 ******* 2026-02-15 03:41:21.435463 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435478 | orchestrator | 2026-02-15 03:41:21.435492 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-15 03:41:21.435528 | orchestrator | Sunday 15 February 2026 03:41:19 +0000 (0:00:00.166) 0:00:21.844 ******* 2026-02-15 03:41:21.435559 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435569 | orchestrator | 2026-02-15 03:41:21.435578 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-15 03:41:21.435587 | orchestrator | Sunday 15 February 2026 03:41:19 +0000 (0:00:00.171) 0:00:22.016 ******* 2026-02-15 03:41:21.435596 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435614 | orchestrator | 2026-02-15 03:41:21.435623 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-15 03:41:21.435632 | orchestrator | Sunday 15 February 2026 03:41:19 +0000 (0:00:00.159) 0:00:22.175 ******* 2026-02-15 03:41:21.435640 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435649 | orchestrator | 2026-02-15 03:41:21.435665 | 
orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-15 03:41:21.435675 | orchestrator | Sunday 15 February 2026 03:41:19 +0000 (0:00:00.146) 0:00:22.322 ******* 2026-02-15 03:41:21.435683 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435692 | orchestrator | 2026-02-15 03:41:21.435701 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-15 03:41:21.435710 | orchestrator | Sunday 15 February 2026 03:41:19 +0000 (0:00:00.160) 0:00:22.483 ******* 2026-02-15 03:41:21.435718 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435727 | orchestrator | 2026-02-15 03:41:21.435736 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-15 03:41:21.435744 | orchestrator | Sunday 15 February 2026 03:41:19 +0000 (0:00:00.158) 0:00:22.641 ******* 2026-02-15 03:41:21.435753 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435762 | orchestrator | 2026-02-15 03:41:21.435775 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-15 03:41:21.435794 | orchestrator | Sunday 15 February 2026 03:41:20 +0000 (0:00:00.156) 0:00:22.798 ******* 2026-02-15 03:41:21.435814 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435828 | orchestrator | 2026-02-15 03:41:21.435842 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-15 03:41:21.435855 | orchestrator | Sunday 15 February 2026 03:41:20 +0000 (0:00:00.389) 0:00:23.188 ******* 2026-02-15 03:41:21.435870 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:21.435886 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 
'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:21.435900 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.435915 | orchestrator | 2026-02-15 03:41:21.435931 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-15 03:41:21.435945 | orchestrator | Sunday 15 February 2026 03:41:20 +0000 (0:00:00.171) 0:00:23.360 ******* 2026-02-15 03:41:21.435959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:21.435974 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:21.435997 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.436012 | orchestrator | 2026-02-15 03:41:21.436026 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-15 03:41:21.436041 | orchestrator | Sunday 15 February 2026 03:41:20 +0000 (0:00:00.180) 0:00:23.540 ******* 2026-02-15 03:41:21.436053 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:21.436068 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:21.436082 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.436096 | orchestrator | 2026-02-15 03:41:21.436111 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-02-15 03:41:21.436126 | orchestrator | Sunday 15 February 2026 03:41:20 +0000 (0:00:00.166) 0:00:23.707 ******* 2026-02-15 03:41:21.436141 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:21.436164 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:21.436174 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.436182 | orchestrator | 2026-02-15 03:41:21.436191 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-15 03:41:21.436200 | orchestrator | Sunday 15 February 2026 03:41:21 +0000 (0:00:00.176) 0:00:23.884 ******* 2026-02-15 03:41:21.436209 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:21.436217 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:21.436226 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:21.436235 | orchestrator | 2026-02-15 03:41:21.436244 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-15 03:41:21.436252 | orchestrator | Sunday 15 February 2026 03:41:21 +0000 (0:00:00.165) 0:00:24.049 ******* 2026-02-15 03:41:21.436270 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:27.292183 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:27.292265 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:27.292274 | orchestrator | 2026-02-15 03:41:27.292280 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-02-15 03:41:27.292298 | orchestrator | Sunday 15 February 2026 03:41:21 +0000 (0:00:00.173) 0:00:24.223 ******* 2026-02-15 03:41:27.292303 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:27.292308 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:27.292313 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:27.292318 | orchestrator | 2026-02-15 03:41:27.292323 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-15 03:41:27.292328 | orchestrator | Sunday 15 February 2026 03:41:21 +0000 (0:00:00.180) 0:00:24.403 ******* 2026-02-15 03:41:27.292332 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:27.292337 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:27.292342 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:27.292346 | orchestrator | 2026-02-15 03:41:27.292351 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-15 03:41:27.292356 | orchestrator | Sunday 15 February 2026 03:41:21 +0000 (0:00:00.172) 0:00:24.575 ******* 2026-02-15 03:41:27.292360 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:41:27.292366 | orchestrator | 2026-02-15 03:41:27.292370 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-15 03:41:27.292375 | orchestrator | Sunday 15 February 2026 03:41:22 +0000 
(0:00:00.541) 0:00:25.117 ******* 2026-02-15 03:41:27.292380 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:41:27.292384 | orchestrator | 2026-02-15 03:41:27.292389 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-15 03:41:27.292393 | orchestrator | Sunday 15 February 2026 03:41:22 +0000 (0:00:00.553) 0:00:25.670 ******* 2026-02-15 03:41:27.292399 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:41:27.292403 | orchestrator | 2026-02-15 03:41:27.292408 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-15 03:41:27.292427 | orchestrator | Sunday 15 February 2026 03:41:23 +0000 (0:00:00.152) 0:00:25.823 ******* 2026-02-15 03:41:27.292432 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'vg_name': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'}) 2026-02-15 03:41:27.292438 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'vg_name': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'}) 2026-02-15 03:41:27.292443 | orchestrator | 2026-02-15 03:41:27.292448 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-15 03:41:27.292452 | orchestrator | Sunday 15 February 2026 03:41:23 +0000 (0:00:00.211) 0:00:26.034 ******* 2026-02-15 03:41:27.292457 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:27.292461 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:27.292466 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:27.292471 | orchestrator | 2026-02-15 03:41:27.292475 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-02-15 03:41:27.292480 | orchestrator | Sunday 15 February 2026 03:41:23 +0000 (0:00:00.388) 0:00:26.423 ******* 2026-02-15 03:41:27.292484 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:27.292489 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:27.292493 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:27.292498 | orchestrator | 2026-02-15 03:41:27.292502 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-15 03:41:27.292548 | orchestrator | Sunday 15 February 2026 03:41:23 +0000 (0:00:00.173) 0:00:26.596 ******* 2026-02-15 03:41:27.292553 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 03:41:27.292558 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 03:41:27.292562 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:41:27.292567 | orchestrator | 2026-02-15 03:41:27.292571 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-15 03:41:27.292576 | orchestrator | Sunday 15 February 2026 03:41:23 +0000 (0:00:00.167) 0:00:26.764 ******* 2026-02-15 03:41:27.292591 | orchestrator | ok: [testbed-node-3] => { 2026-02-15 03:41:27.292596 | orchestrator |  "lvm_report": { 2026-02-15 03:41:27.292601 | orchestrator |  "lv": [ 2026-02-15 03:41:27.292605 | orchestrator |  { 2026-02-15 03:41:27.292610 | orchestrator |  "lv_name": 
"osd-block-11907033-e329-56e1-bf1e-182edc1a3769", 2026-02-15 03:41:27.292615 | orchestrator |  "vg_name": "ceph-11907033-e329-56e1-bf1e-182edc1a3769" 2026-02-15 03:41:27.292620 | orchestrator |  }, 2026-02-15 03:41:27.292625 | orchestrator |  { 2026-02-15 03:41:27.292633 | orchestrator |  "lv_name": "osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55", 2026-02-15 03:41:27.292638 | orchestrator |  "vg_name": "ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55" 2026-02-15 03:41:27.292642 | orchestrator |  } 2026-02-15 03:41:27.292647 | orchestrator |  ], 2026-02-15 03:41:27.292651 | orchestrator |  "pv": [ 2026-02-15 03:41:27.292656 | orchestrator |  { 2026-02-15 03:41:27.292660 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-15 03:41:27.292665 | orchestrator |  "vg_name": "ceph-11907033-e329-56e1-bf1e-182edc1a3769" 2026-02-15 03:41:27.292674 | orchestrator |  }, 2026-02-15 03:41:27.292679 | orchestrator |  { 2026-02-15 03:41:27.292684 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-15 03:41:27.292688 | orchestrator |  "vg_name": "ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55" 2026-02-15 03:41:27.292693 | orchestrator |  } 2026-02-15 03:41:27.292697 | orchestrator |  ] 2026-02-15 03:41:27.292702 | orchestrator |  } 2026-02-15 03:41:27.292707 | orchestrator | } 2026-02-15 03:41:27.292712 | orchestrator | 2026-02-15 03:41:27.292717 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-15 03:41:27.292721 | orchestrator | 2026-02-15 03:41:27.292726 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-15 03:41:27.292730 | orchestrator | Sunday 15 February 2026 03:41:24 +0000 (0:00:00.344) 0:00:27.108 ******* 2026-02-15 03:41:27.292735 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-02-15 03:41:27.292740 | orchestrator | 2026-02-15 03:41:27.292745 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-15 
03:41:27.292750 | orchestrator | Sunday 15 February 2026 03:41:24 +0000 (0:00:00.324) 0:00:27.433 ******* 2026-02-15 03:41:27.292755 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:41:27.292760 | orchestrator | 2026-02-15 03:41:27.292766 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:27.292771 | orchestrator | Sunday 15 February 2026 03:41:24 +0000 (0:00:00.272) 0:00:27.705 ******* 2026-02-15 03:41:27.292776 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-02-15 03:41:27.292782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-02-15 03:41:27.292787 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-02-15 03:41:27.292792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-02-15 03:41:27.292797 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-02-15 03:41:27.292802 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-02-15 03:41:27.292807 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-02-15 03:41:27.292813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-02-15 03:41:27.292821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-02-15 03:41:27.292828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-02-15 03:41:27.292835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-02-15 03:41:27.292843 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-02-15 03:41:27.292850 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-02-15 03:41:27.292859 | orchestrator |
2026-02-15 03:41:27.292866 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:27.292874 | orchestrator | Sunday 15 February 2026 03:41:25 +0000 (0:00:00.493) 0:00:28.199 *******
2026-02-15 03:41:27.292881 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:27.292889 | orchestrator |
2026-02-15 03:41:27.292895 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:27.292900 | orchestrator | Sunday 15 February 2026 03:41:25 +0000 (0:00:00.219) 0:00:28.419 *******
2026-02-15 03:41:27.292904 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:27.292909 | orchestrator |
2026-02-15 03:41:27.292914 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:27.292925 | orchestrator | Sunday 15 February 2026 03:41:26 +0000 (0:00:00.698) 0:00:29.117 *******
2026-02-15 03:41:27.292930 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:27.292939 | orchestrator |
2026-02-15 03:41:27.292944 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:27.292948 | orchestrator | Sunday 15 February 2026 03:41:26 +0000 (0:00:00.247) 0:00:29.364 *******
2026-02-15 03:41:27.292953 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:27.292957 | orchestrator |
2026-02-15 03:41:27.292962 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:27.292967 | orchestrator | Sunday 15 February 2026 03:41:26 +0000 (0:00:00.231) 0:00:29.596 *******
2026-02-15 03:41:27.292971 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:27.292976 | orchestrator |
2026-02-15 03:41:27.292980 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:27.292985 | orchestrator | Sunday 15 February 2026 03:41:27 +0000 (0:00:00.250) 0:00:29.846 *******
2026-02-15 03:41:27.292990 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:27.292994 | orchestrator |
2026-02-15 03:41:27.293003 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:39.355810 | orchestrator | Sunday 15 February 2026 03:41:27 +0000 (0:00:00.234) 0:00:30.080 *******
2026-02-15 03:41:39.355928 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.355945 | orchestrator |
2026-02-15 03:41:39.355959 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:39.355971 | orchestrator | Sunday 15 February 2026 03:41:27 +0000 (0:00:00.244) 0:00:30.325 *******
2026-02-15 03:41:39.356000 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.356012 | orchestrator |
2026-02-15 03:41:39.356023 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:39.356035 | orchestrator | Sunday 15 February 2026 03:41:27 +0000 (0:00:00.223) 0:00:30.549 *******
2026-02-15 03:41:39.356046 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006)
2026-02-15 03:41:39.356065 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006)
2026-02-15 03:41:39.356084 | orchestrator |
2026-02-15 03:41:39.356104 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:39.356124 | orchestrator | Sunday 15 February 2026 03:41:28 +0000 (0:00:00.457) 0:00:31.006 *******
2026-02-15 03:41:39.356142 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57)
2026-02-15 03:41:39.356162 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57)
2026-02-15 03:41:39.356174 | orchestrator |
2026-02-15 03:41:39.356185 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:39.356198 | orchestrator | Sunday 15 February 2026 03:41:28 +0000 (0:00:00.480) 0:00:31.486 *******
2026-02-15 03:41:39.356216 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0)
2026-02-15 03:41:39.356242 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0)
2026-02-15 03:41:39.356264 | orchestrator |
2026-02-15 03:41:39.356282 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:39.356300 | orchestrator | Sunday 15 February 2026 03:41:29 +0000 (0:00:00.825) 0:00:32.311 *******
2026-02-15 03:41:39.356317 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7)
2026-02-15 03:41:39.356334 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7)
2026-02-15 03:41:39.356352 | orchestrator |
2026-02-15 03:41:39.356371 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-02-15 03:41:39.356389 | orchestrator | Sunday 15 February 2026 03:41:30 +0000 (0:00:01.026) 0:00:33.338 *******
2026-02-15 03:41:39.356409 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-02-15 03:41:39.356428 | orchestrator |
2026-02-15 03:41:39.356448 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.356496 | orchestrator | Sunday 15 February 2026 03:41:30 +0000 (0:00:00.387) 0:00:33.726 *******
2026-02-15 03:41:39.356546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-02-15 03:41:39.356565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-02-15 03:41:39.356583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-02-15 03:41:39.356602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-02-15 03:41:39.356621 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-02-15 03:41:39.356640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-02-15 03:41:39.356657 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-02-15 03:41:39.356677 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-02-15 03:41:39.356689 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-02-15 03:41:39.356700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-02-15 03:41:39.356711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-02-15 03:41:39.356722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-02-15 03:41:39.356733 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-02-15 03:41:39.356743 | orchestrator |
2026-02-15 03:41:39.356755 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.356766 | orchestrator | Sunday 15 February 2026 03:41:31 +0000 (0:00:00.486) 0:00:34.212 *******
2026-02-15 03:41:39.356776 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.356787 | orchestrator |
2026-02-15 03:41:39.356798 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.356809 | orchestrator | Sunday 15 February 2026 03:41:31 +0000 (0:00:00.292) 0:00:34.505 *******
2026-02-15 03:41:39.356819 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.356830 | orchestrator |
2026-02-15 03:41:39.356841 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.356852 | orchestrator | Sunday 15 February 2026 03:41:31 +0000 (0:00:00.214) 0:00:34.719 *******
2026-02-15 03:41:39.356878 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.356890 | orchestrator |
2026-02-15 03:41:39.356936 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.356948 | orchestrator | Sunday 15 February 2026 03:41:32 +0000 (0:00:00.224) 0:00:34.944 *******
2026-02-15 03:41:39.356959 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.356970 | orchestrator |
2026-02-15 03:41:39.356980 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.357000 | orchestrator | Sunday 15 February 2026 03:41:32 +0000 (0:00:00.249) 0:00:35.193 *******
2026-02-15 03:41:39.357012 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.357023 | orchestrator |
2026-02-15 03:41:39.357034 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.357045 | orchestrator | Sunday 15 February 2026 03:41:32 +0000 (0:00:00.234) 0:00:35.428 *******
2026-02-15 03:41:39.357055 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.357066 | orchestrator |
2026-02-15 03:41:39.357077 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.357088 | orchestrator | Sunday 15 February 2026 03:41:32 +0000 (0:00:00.244) 0:00:35.672 *******
2026-02-15 03:41:39.357098 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.357109 | orchestrator |
2026-02-15 03:41:39.357120 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.357141 | orchestrator | Sunday 15 February 2026 03:41:33 +0000 (0:00:00.221) 0:00:35.893 *******
2026-02-15 03:41:39.357151 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.357162 | orchestrator |
2026-02-15 03:41:39.357173 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.357184 | orchestrator | Sunday 15 February 2026 03:41:33 +0000 (0:00:00.726) 0:00:36.620 *******
2026-02-15 03:41:39.357194 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-02-15 03:41:39.357205 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-02-15 03:41:39.357216 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-02-15 03:41:39.357227 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-02-15 03:41:39.357237 | orchestrator |
2026-02-15 03:41:39.357248 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.357259 | orchestrator | Sunday 15 February 2026 03:41:34 +0000 (0:00:00.788) 0:00:37.408 *******
2026-02-15 03:41:39.357270 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.357280 | orchestrator |
2026-02-15 03:41:39.357291 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.357302 | orchestrator | Sunday 15 February 2026 03:41:34 +0000 (0:00:00.256) 0:00:37.665 *******
2026-02-15 03:41:39.357312 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.357323 | orchestrator |
2026-02-15 03:41:39.357334 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.357345 | orchestrator | Sunday 15 February 2026 03:41:35 +0000 (0:00:00.238) 0:00:37.904 *******
2026-02-15 03:41:39.357356 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.357366 | orchestrator |
2026-02-15 03:41:39.357378 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-02-15 03:41:39.357388 | orchestrator | Sunday 15 February 2026 03:41:35 +0000 (0:00:00.242) 0:00:38.146 *******
2026-02-15 03:41:39.357399 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.357410 | orchestrator |
2026-02-15 03:41:39.357421 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-02-15 03:41:39.357432 | orchestrator | Sunday 15 February 2026 03:41:35 +0000 (0:00:00.249) 0:00:38.395 *******
2026-02-15 03:41:39.357442 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.357453 | orchestrator |
2026-02-15 03:41:39.357464 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-02-15 03:41:39.357475 | orchestrator | Sunday 15 February 2026 03:41:35 +0000 (0:00:00.159) 0:00:38.555 *******
2026-02-15 03:41:39.357486 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '85fe8ada-5694-5853-9626-8b4c90604800'}})
2026-02-15 03:41:39.357497 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '12f88160-c11a-5ad6-adc7-3b0cfe47daee'}})
2026-02-15 03:41:39.357548 | orchestrator |
2026-02-15 03:41:39.357561 | orchestrator | TASK [Create block VGs] ********************************************************
2026-02-15 03:41:39.357572 | orchestrator | Sunday 15 February 2026 03:41:35 +0000 (0:00:00.216) 0:00:38.771 *******
2026-02-15 03:41:39.357584 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:39.357595 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:39.357606 | orchestrator |
2026-02-15 03:41:39.357617 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-02-15 03:41:39.357628 | orchestrator | Sunday 15 February 2026 03:41:37 +0000 (0:00:01.885) 0:00:40.656 *******
2026-02-15 03:41:39.357639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:39.357652 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:39.357670 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:39.357681 | orchestrator |
2026-02-15 03:41:39.357692 | orchestrator | TASK [Create block LVs] ********************************************************
2026-02-15 03:41:39.357703 | orchestrator | Sunday 15 February 2026 03:41:38 +0000 (0:00:00.175) 0:00:40.832 *******
2026-02-15 03:41:39.357714 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:39.357732 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:45.755629 | orchestrator |
2026-02-15 03:41:45.755803 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-02-15 03:41:45.755855 | orchestrator | Sunday 15 February 2026 03:41:39 +0000 (0:00:01.308) 0:00:42.141 *******
2026-02-15 03:41:45.755875 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:45.755895 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:45.755914 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.755935 | orchestrator |
2026-02-15 03:41:45.755955 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-02-15 03:41:45.755974 | orchestrator | Sunday 15 February 2026 03:41:39 +0000 (0:00:00.455) 0:00:42.596 *******
2026-02-15 03:41:45.755994 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756014 | orchestrator |
2026-02-15 03:41:45.756032 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-02-15 03:41:45.756051 | orchestrator | Sunday 15 February 2026 03:41:39 +0000 (0:00:00.157) 0:00:42.754 *******
2026-02-15 03:41:45.756068 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:45.756087 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:45.756106 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756122 | orchestrator |
2026-02-15 03:41:45.756140 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-02-15 03:41:45.756158 | orchestrator | Sunday 15 February 2026 03:41:40 +0000 (0:00:00.166) 0:00:42.920 *******
2026-02-15 03:41:45.756177 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756196 | orchestrator |
2026-02-15 03:41:45.756215 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-02-15 03:41:45.756236 | orchestrator | Sunday 15 February 2026 03:41:40 +0000 (0:00:00.148) 0:00:43.068 *******
2026-02-15 03:41:45.756257 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:45.756277 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:45.756292 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756304 | orchestrator |
2026-02-15 03:41:45.756314 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-02-15 03:41:45.756325 | orchestrator | Sunday 15 February 2026 03:41:40 +0000 (0:00:00.185) 0:00:43.254 *******
2026-02-15 03:41:45.756336 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756347 | orchestrator |
2026-02-15 03:41:45.756358 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-02-15 03:41:45.756368 | orchestrator | Sunday 15 February 2026 03:41:40 +0000 (0:00:00.210) 0:00:43.464 *******
2026-02-15 03:41:45.756401 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:45.756413 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:45.756424 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756435 | orchestrator |
2026-02-15 03:41:45.756446 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-02-15 03:41:45.756457 | orchestrator | Sunday 15 February 2026 03:41:40 +0000 (0:00:00.179) 0:00:43.644 *******
2026-02-15 03:41:45.756468 | orchestrator | ok: [testbed-node-4]
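The "Create dict of block VGs -> PVs" and "Create block VGs/LVs" tasks above show the naming scheme the play uses: each entry of `ceph_osd_devices` carries an `osd_lvm_uuid`, from which the volume group name `ceph-<uuid>` and logical volume name `osd-block-<uuid>` are derived. A minimal sketch of that derivation (this is illustrative Python, not the playbook's actual Jinja2 code; the helper name `lvm_volumes_from_devices` is made up), using the two UUIDs seen in the log:

```python
# Sketch: derive the "ceph-<uuid>" VG and "osd-block-<uuid>" LV names that the
# "Create block VGs" / "Create block LVs" tasks operate on. Assumption: the
# playbook builds equivalent lvm_volumes entries from osd_lvm_uuid values.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "85fe8ada-5694-5853-9626-8b4c90604800"},
    "sdc": {"osd_lvm_uuid": "12f88160-c11a-5ad6-adc7-3b0cfe47daee"},
}

def lvm_volumes_from_devices(devices: dict) -> list[dict]:
    """Build lvm_volumes-style entries (data LV name + data VG name)."""
    return [
        {"data": f"osd-block-{v['osd_lvm_uuid']}",
         "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
        for v in devices.values()
    ]

for entry in lvm_volumes_from_devices(ceph_osd_devices):
    print(entry["data_vg"], "->", entry["data"])
```

Both `changed:` items in the "Create block VGs" task match names produced this way.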
2026-02-15 03:41:45.756479 | orchestrator |
2026-02-15 03:41:45.756490 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-02-15 03:41:45.756501 | orchestrator | Sunday 15 February 2026 03:41:41 +0000 (0:00:00.185) 0:00:43.830 *******
2026-02-15 03:41:45.756539 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:45.756550 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:45.756561 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756572 | orchestrator |
2026-02-15 03:41:45.756583 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-02-15 03:41:45.756594 | orchestrator | Sunday 15 February 2026 03:41:41 +0000 (0:00:00.173) 0:00:44.003 *******
2026-02-15 03:41:45.756605 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:45.756616 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:45.756627 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756638 | orchestrator |
2026-02-15 03:41:45.756649 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-02-15 03:41:45.756681 | orchestrator | Sunday 15 February 2026 03:41:41 +0000 (0:00:00.170) 0:00:44.174 *******
2026-02-15 03:41:45.756700 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:45.756712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:45.756723 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756733 | orchestrator |
2026-02-15 03:41:45.756744 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-02-15 03:41:45.756755 | orchestrator | Sunday 15 February 2026 03:41:41 +0000 (0:00:00.165) 0:00:44.339 *******
2026-02-15 03:41:45.756766 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756777 | orchestrator |
2026-02-15 03:41:45.756788 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-02-15 03:41:45.756798 | orchestrator | Sunday 15 February 2026 03:41:41 +0000 (0:00:00.392) 0:00:44.732 *******
2026-02-15 03:41:45.756809 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756820 | orchestrator |
2026-02-15 03:41:45.756831 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-02-15 03:41:45.756842 | orchestrator | Sunday 15 February 2026 03:41:42 +0000 (0:00:00.164) 0:00:44.896 *******
2026-02-15 03:41:45.756853 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.756863 | orchestrator |
2026-02-15 03:41:45.756874 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-02-15 03:41:45.756885 | orchestrator | Sunday 15 February 2026 03:41:42 +0000 (0:00:00.158) 0:00:45.054 *******
2026-02-15 03:41:45.756903 | orchestrator | ok: [testbed-node-4] => {
2026-02-15 03:41:45.756914 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2026-02-15 03:41:45.756925 | orchestrator | }
2026-02-15 03:41:45.756937 | orchestrator |
2026-02-15 03:41:45.756948 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-02-15 03:41:45.756959 | orchestrator | Sunday 15 February 2026 03:41:42 +0000 (0:00:00.162) 0:00:45.217 *******
2026-02-15 03:41:45.756969 | orchestrator | ok: [testbed-node-4] => {
2026-02-15 03:41:45.756980 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2026-02-15 03:41:45.756993 | orchestrator | }
2026-02-15 03:41:45.757011 | orchestrator |
2026-02-15 03:41:45.757029 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-02-15 03:41:45.757048 | orchestrator | Sunday 15 February 2026 03:41:42 +0000 (0:00:00.183) 0:00:45.401 *******
2026-02-15 03:41:45.757066 | orchestrator | ok: [testbed-node-4] => {
2026-02-15 03:41:45.757085 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2026-02-15 03:41:45.757102 | orchestrator | }
2026-02-15 03:41:45.757121 | orchestrator |
2026-02-15 03:41:45.757140 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-02-15 03:41:45.757159 | orchestrator | Sunday 15 February 2026 03:41:42 +0000 (0:00:00.159) 0:00:45.560 *******
2026-02-15 03:41:45.757179 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:41:45.757198 | orchestrator |
2026-02-15 03:41:45.757216 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-02-15 03:41:45.757236 | orchestrator | Sunday 15 February 2026 03:41:43 +0000 (0:00:00.555) 0:00:46.116 *******
2026-02-15 03:41:45.757255 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:41:45.757274 | orchestrator |
2026-02-15 03:41:45.757291 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-02-15 03:41:45.757303 | orchestrator | Sunday 15 February 2026 03:41:43 +0000 (0:00:00.538) 0:00:46.655 *******
2026-02-15 03:41:45.757314 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:41:45.757324 | orchestrator |
2026-02-15 03:41:45.757335 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-02-15 03:41:45.757346 | orchestrator | Sunday 15 February 2026 03:41:44 +0000 (0:00:00.500) 0:00:47.155 *******
2026-02-15 03:41:45.757357 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:41:45.757367 | orchestrator |
2026-02-15 03:41:45.757379 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-02-15 03:41:45.757390 | orchestrator | Sunday 15 February 2026 03:41:44 +0000 (0:00:00.162) 0:00:47.318 *******
2026-02-15 03:41:45.757401 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.757411 | orchestrator |
2026-02-15 03:41:45.757423 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-02-15 03:41:45.757434 | orchestrator | Sunday 15 February 2026 03:41:44 +0000 (0:00:00.114) 0:00:47.432 *******
2026-02-15 03:41:45.757445 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.757455 | orchestrator |
2026-02-15 03:41:45.757466 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-02-15 03:41:45.757477 | orchestrator | Sunday 15 February 2026 03:41:44 +0000 (0:00:00.358) 0:00:47.791 *******
2026-02-15 03:41:45.757488 | orchestrator | ok: [testbed-node-4] => {
2026-02-15 03:41:45.757499 | orchestrator |  "vgs_report": {
2026-02-15 03:41:45.757540 | orchestrator |  "vg": []
2026-02-15 03:41:45.757552 | orchestrator |  }
2026-02-15 03:41:45.757563 | orchestrator | }
2026-02-15 03:41:45.757574 | orchestrator |
2026-02-15 03:41:45.757586 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-02-15 03:41:45.757597 | orchestrator | Sunday 15 February 2026 03:41:45 +0000 (0:00:00.167) 0:00:47.958 *******
2026-02-15 03:41:45.757607 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.757618 | orchestrator |
2026-02-15 03:41:45.757629 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-02-15 03:41:45.757640 | orchestrator | Sunday 15 February 2026 03:41:45 +0000 (0:00:00.154) 0:00:48.113 *******
2026-02-15 03:41:45.757651 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.757671 | orchestrator |
2026-02-15 03:41:45.757682 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-02-15 03:41:45.757693 | orchestrator | Sunday 15 February 2026 03:41:45 +0000 (0:00:00.150) 0:00:48.264 *******
2026-02-15 03:41:45.757704 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.757715 | orchestrator |
2026-02-15 03:41:45.757726 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-02-15 03:41:45.757737 | orchestrator | Sunday 15 February 2026 03:41:45 +0000 (0:00:00.131) 0:00:48.395 *******
2026-02-15 03:41:45.757748 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:45.757759 | orchestrator |
2026-02-15 03:41:45.757778 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-02-15 03:41:50.977832 | orchestrator | Sunday 15 February 2026 03:41:45 +0000 (0:00:00.146) 0:00:48.542 *******
2026-02-15 03:41:50.977967 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.977990 | orchestrator |
2026-02-15 03:41:50.978005 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-02-15 03:41:50.978073 | orchestrator | Sunday 15 February 2026 03:41:45 +0000 (0:00:00.158) 0:00:48.701 *******
2026-02-15 03:41:50.978089 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978097 | orchestrator |
2026-02-15 03:41:50.978104 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-02-15 03:41:50.978111 | orchestrator | Sunday 15 February 2026 03:41:46 +0000 (0:00:00.145) 0:00:48.846 *******
2026-02-15 03:41:50.978118 | orchestrator | skipping: [testbed-node-4]
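The "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" tasks, followed by "Combine JSON from _db/wal/db_wal_vgs_cmd_output" and the size/availability checks, suggest the play collects an LVM JSON report per VG class and compares sizes in bytes. A minimal sketch of parsing such a report, assuming output shaped like `vgs --units b --reportformat json -o vg_name,vg_size,vg_free` (the sample VG name `ceph-db-0` and its sizes are invented for illustration):

```python
import json

# Hypothetical sample of `vgs --units b --reportformat json` output; the real
# tasks combine similar reports for DB, WAL, and DB+WAL VGs before checking
# whether the requested LV sizes still fit into vg_free.
vgs_json = '''
{"report": [{"vg": [
    {"vg_name": "ceph-db-0", "vg_size": "107374182400B", "vg_free": "32212254720B"}
]}]}
'''

def vg_sizes(report_text: str) -> dict:
    """Map VG name -> (total_bytes, free_bytes), stripping the trailing 'B'."""
    report = json.loads(report_text)
    return {
        vg["vg_name"]: (int(vg["vg_size"].rstrip("B")),
                        int(vg["vg_free"].rstrip("B")))
        for vg in report["report"][0]["vg"]
    }

sizes = vg_sizes(vgs_json)
# A check in the spirit of "Fail if DB LV size < 30 GiB": 30 GiB = 30 * 2**30.
assert sizes["ceph-db-0"][1] >= 30 * 2**30
```

In this run all three report lists are empty (`"vg": []` in the printed `vgs_report`), so every size calculation and check is skipped.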
2026-02-15 03:41:50.978125 | orchestrator |
2026-02-15 03:41:50.978132 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-02-15 03:41:50.978139 | orchestrator | Sunday 15 February 2026 03:41:46 +0000 (0:00:00.150) 0:00:48.997 *******
2026-02-15 03:41:50.978146 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978153 | orchestrator |
2026-02-15 03:41:50.978160 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-02-15 03:41:50.978167 | orchestrator | Sunday 15 February 2026 03:41:46 +0000 (0:00:00.150) 0:00:49.147 *******
2026-02-15 03:41:50.978174 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978181 | orchestrator |
2026-02-15 03:41:50.978187 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-02-15 03:41:50.978194 | orchestrator | Sunday 15 February 2026 03:41:46 +0000 (0:00:00.145) 0:00:49.293 *******
2026-02-15 03:41:50.978202 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978209 | orchestrator |
2026-02-15 03:41:50.978216 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-02-15 03:41:50.978223 | orchestrator | Sunday 15 February 2026 03:41:46 +0000 (0:00:00.379) 0:00:49.673 *******
2026-02-15 03:41:50.978230 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978236 | orchestrator |
2026-02-15 03:41:50.978243 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-02-15 03:41:50.978250 | orchestrator | Sunday 15 February 2026 03:41:47 +0000 (0:00:00.158) 0:00:49.831 *******
2026-02-15 03:41:50.978257 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978264 | orchestrator |
2026-02-15 03:41:50.978271 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-02-15 03:41:50.978278 | orchestrator | Sunday 15 February 2026 03:41:47 +0000 (0:00:00.155) 0:00:49.986 *******
2026-02-15 03:41:50.978284 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978291 | orchestrator |
2026-02-15 03:41:50.978298 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-02-15 03:41:50.978305 | orchestrator | Sunday 15 February 2026 03:41:47 +0000 (0:00:00.151) 0:00:50.138 *******
2026-02-15 03:41:50.978312 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978319 | orchestrator |
2026-02-15 03:41:50.978326 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-02-15 03:41:50.978333 | orchestrator | Sunday 15 February 2026 03:41:47 +0000 (0:00:00.168) 0:00:50.306 *******
2026-02-15 03:41:50.978342 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:50.978370 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:50.978378 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978388 | orchestrator |
2026-02-15 03:41:50.978399 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-02-15 03:41:50.978409 | orchestrator | Sunday 15 February 2026 03:41:47 +0000 (0:00:00.178) 0:00:50.485 *******
2026-02-15 03:41:50.978427 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:50.978440 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:50.978451 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978462 | orchestrator |
2026-02-15 03:41:50.978473 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-02-15 03:41:50.978484 | orchestrator | Sunday 15 February 2026 03:41:47 +0000 (0:00:00.180) 0:00:50.666 *******
2026-02-15 03:41:50.978494 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:50.978565 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:50.978582 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978594 | orchestrator |
2026-02-15 03:41:50.978604 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-02-15 03:41:50.978612 | orchestrator | Sunday 15 February 2026 03:41:48 +0000 (0:00:00.172) 0:00:50.838 *******
2026-02-15 03:41:50.978628 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:50.978636 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:50.978644 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978652 | orchestrator |
2026-02-15 03:41:50.978679 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-02-15 03:41:50.978693 | orchestrator | Sunday 15 February 2026 03:41:48 +0000 (0:00:00.151) 0:00:50.990 *******
2026-02-15 03:41:50.978701 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:50.978708 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:50.978718 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978729 | orchestrator |
2026-02-15 03:41:50.978741 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-02-15 03:41:50.978751 | orchestrator | Sunday 15 February 2026 03:41:48 +0000 (0:00:00.181) 0:00:51.172 *******
2026-02-15 03:41:50.978763 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:50.978774 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:50.978785 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978796 | orchestrator |
2026-02-15 03:41:50.978808 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-02-15 03:41:50.978820 | orchestrator | Sunday 15 February 2026 03:41:48 +0000 (0:00:00.172) 0:00:51.344 *******
2026-02-15 03:41:50.978845 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:50.978858 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:50.978865 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978872 | orchestrator |
2026-02-15 03:41:50.978879 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-02-15 03:41:50.978886 | orchestrator | Sunday 15 February 2026 03:41:48 +0000 (0:00:00.394) 0:00:51.739 *******
2026-02-15 03:41:50.978893 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:50.978899 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:50.978906 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.978913 | orchestrator |
2026-02-15 03:41:50.978920 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-02-15 03:41:50.978927 | orchestrator | Sunday 15 February 2026 03:41:49 +0000 (0:00:00.200) 0:00:51.939 *******
2026-02-15 03:41:50.978934 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:41:50.978941 | orchestrator |
2026-02-15 03:41:50.978948 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-02-15 03:41:50.978954 | orchestrator | Sunday 15 February 2026 03:41:49 +0000 (0:00:00.548) 0:00:52.488 *******
2026-02-15 03:41:50.978961 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:41:50.978968 | orchestrator |
2026-02-15 03:41:50.978975 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-02-15 03:41:50.978982 | orchestrator | Sunday 15 February 2026 03:41:50 +0000 (0:00:00.543) 0:00:53.031 *******
2026-02-15 03:41:50.978988 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:41:50.978995 | orchestrator |
2026-02-15 03:41:50.979002 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-02-15 03:41:50.979009 | orchestrator | Sunday 15 February 2026 03:41:50 +0000 (0:00:00.170) 0:00:53.202 *******
2026-02-15 03:41:50.979016 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'vg_name': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:50.979024 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'vg_name': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:50.979031 | orchestrator |
2026-02-15 03:41:50.979038 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-02-15 03:41:50.979045 | orchestrator | Sunday 15 February 2026 03:41:50 +0000 (0:00:00.211) 0:00:53.413 *******
2026-02-15 03:41:50.979052 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:50.979059 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:50.979065 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:50.979072 | orchestrator |
2026-02-15 03:41:50.979079 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2026-02-15 03:41:50.979086 | orchestrator | Sunday 15 February 2026 03:41:50 +0000 (0:00:00.179) 0:00:53.593 *******
2026-02-15 03:41:50.979093 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 03:41:50.979108 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 03:41:58.222219 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:41:58.222338 | orchestrator |
2026-02-15 03:41:58.222355 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2026-02-15 03:41:58.222369 |
orchestrator | Sunday 15 February 2026 03:41:50 +0000 (0:00:00.168) 0:00:53.762 ******* 2026-02-15 03:41:58.222381 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})  2026-02-15 03:41:58.222394 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})  2026-02-15 03:41:58.222405 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:41:58.222416 | orchestrator | 2026-02-15 03:41:58.222428 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-15 03:41:58.222439 | orchestrator | Sunday 15 February 2026 03:41:51 +0000 (0:00:00.176) 0:00:53.938 ******* 2026-02-15 03:41:58.222450 | orchestrator | ok: [testbed-node-4] => { 2026-02-15 03:41:58.222461 | orchestrator |  "lvm_report": { 2026-02-15 03:41:58.222474 | orchestrator |  "lv": [ 2026-02-15 03:41:58.222485 | orchestrator |  { 2026-02-15 03:41:58.222496 | orchestrator |  "lv_name": "osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee", 2026-02-15 03:41:58.222559 | orchestrator |  "vg_name": "ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee" 2026-02-15 03:41:58.222573 | orchestrator |  }, 2026-02-15 03:41:58.222584 | orchestrator |  { 2026-02-15 03:41:58.222595 | orchestrator |  "lv_name": "osd-block-85fe8ada-5694-5853-9626-8b4c90604800", 2026-02-15 03:41:58.222606 | orchestrator |  "vg_name": "ceph-85fe8ada-5694-5853-9626-8b4c90604800" 2026-02-15 03:41:58.222617 | orchestrator |  } 2026-02-15 03:41:58.222628 | orchestrator |  ], 2026-02-15 03:41:58.222639 | orchestrator |  "pv": [ 2026-02-15 03:41:58.222653 | orchestrator |  { 2026-02-15 03:41:58.222665 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-15 03:41:58.222678 | orchestrator |  "vg_name": "ceph-85fe8ada-5694-5853-9626-8b4c90604800" 2026-02-15 03:41:58.222691 | orchestrator |  }, 2026-02-15 
03:41:58.222703 | orchestrator |  { 2026-02-15 03:41:58.222716 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-15 03:41:58.222729 | orchestrator |  "vg_name": "ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee" 2026-02-15 03:41:58.222742 | orchestrator |  } 2026-02-15 03:41:58.222754 | orchestrator |  ] 2026-02-15 03:41:58.222766 | orchestrator |  } 2026-02-15 03:41:58.222779 | orchestrator | } 2026-02-15 03:41:58.222792 | orchestrator | 2026-02-15 03:41:58.222805 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-02-15 03:41:58.222818 | orchestrator | 2026-02-15 03:41:58.222830 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-02-15 03:41:58.222843 | orchestrator | Sunday 15 February 2026 03:41:51 +0000 (0:00:00.352) 0:00:54.290 ******* 2026-02-15 03:41:58.222855 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-02-15 03:41:58.222873 | orchestrator | 2026-02-15 03:41:58.222894 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-02-15 03:41:58.222913 | orchestrator | Sunday 15 February 2026 03:41:52 +0000 (0:00:00.754) 0:00:55.045 ******* 2026-02-15 03:41:58.222931 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:41:58.222951 | orchestrator | 2026-02-15 03:41:58.222969 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.222989 | orchestrator | Sunday 15 February 2026 03:41:52 +0000 (0:00:00.276) 0:00:55.322 ******* 2026-02-15 03:41:58.223009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-02-15 03:41:58.223028 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-02-15 03:41:58.223050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-02-15 03:41:58.223097 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-02-15 03:41:58.223113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-02-15 03:41:58.223124 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-02-15 03:41:58.223135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-02-15 03:41:58.223146 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-02-15 03:41:58.223156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-02-15 03:41:58.223167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-02-15 03:41:58.223178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-02-15 03:41:58.223189 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-02-15 03:41:58.223199 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-02-15 03:41:58.223210 | orchestrator | 2026-02-15 03:41:58.223224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223243 | orchestrator | Sunday 15 February 2026 03:41:52 +0000 (0:00:00.439) 0:00:55.762 ******* 2026-02-15 03:41:58.223261 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:41:58.223279 | orchestrator | 2026-02-15 03:41:58.223297 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223313 | orchestrator | Sunday 15 February 2026 03:41:53 +0000 (0:00:00.219) 0:00:55.981 ******* 2026-02-15 03:41:58.223331 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:41:58.223351 | orchestrator | 2026-02-15 
03:41:58.223369 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223416 | orchestrator | Sunday 15 February 2026 03:41:53 +0000 (0:00:00.231) 0:00:56.212 ******* 2026-02-15 03:41:58.223432 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:41:58.223443 | orchestrator | 2026-02-15 03:41:58.223454 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223465 | orchestrator | Sunday 15 February 2026 03:41:53 +0000 (0:00:00.225) 0:00:56.437 ******* 2026-02-15 03:41:58.223476 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:41:58.223487 | orchestrator | 2026-02-15 03:41:58.223499 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223579 | orchestrator | Sunday 15 February 2026 03:41:53 +0000 (0:00:00.240) 0:00:56.678 ******* 2026-02-15 03:41:58.223593 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:41:58.223604 | orchestrator | 2026-02-15 03:41:58.223615 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223626 | orchestrator | Sunday 15 February 2026 03:41:54 +0000 (0:00:00.234) 0:00:56.913 ******* 2026-02-15 03:41:58.223637 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:41:58.223648 | orchestrator | 2026-02-15 03:41:58.223659 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223669 | orchestrator | Sunday 15 February 2026 03:41:54 +0000 (0:00:00.224) 0:00:57.138 ******* 2026-02-15 03:41:58.223680 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:41:58.223691 | orchestrator | 2026-02-15 03:41:58.223702 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223713 | orchestrator | Sunday 15 February 2026 03:41:54 +0000 (0:00:00.225) 
0:00:57.363 ******* 2026-02-15 03:41:58.223724 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:41:58.223734 | orchestrator | 2026-02-15 03:41:58.223746 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223757 | orchestrator | Sunday 15 February 2026 03:41:55 +0000 (0:00:00.776) 0:00:58.139 ******* 2026-02-15 03:41:58.223768 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd) 2026-02-15 03:41:58.223793 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd) 2026-02-15 03:41:58.223805 | orchestrator | 2026-02-15 03:41:58.223816 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223827 | orchestrator | Sunday 15 February 2026 03:41:55 +0000 (0:00:00.556) 0:00:58.696 ******* 2026-02-15 03:41:58.223875 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2) 2026-02-15 03:41:58.223887 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2) 2026-02-15 03:41:58.223898 | orchestrator | 2026-02-15 03:41:58.223909 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223920 | orchestrator | Sunday 15 February 2026 03:41:56 +0000 (0:00:00.491) 0:00:59.188 ******* 2026-02-15 03:41:58.223931 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58) 2026-02-15 03:41:58.223942 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58) 2026-02-15 03:41:58.223954 | orchestrator | 2026-02-15 03:41:58.223966 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.223987 | orchestrator | Sunday 15 
February 2026 03:41:56 +0000 (0:00:00.459) 0:00:59.647 ******* 2026-02-15 03:41:58.224006 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f) 2026-02-15 03:41:58.224025 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f) 2026-02-15 03:41:58.224045 | orchestrator | 2026-02-15 03:41:58.224065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-02-15 03:41:58.224085 | orchestrator | Sunday 15 February 2026 03:41:57 +0000 (0:00:00.475) 0:01:00.123 ******* 2026-02-15 03:41:58.224106 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-02-15 03:41:58.224126 | orchestrator | 2026-02-15 03:41:58.224145 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:41:58.224164 | orchestrator | Sunday 15 February 2026 03:41:57 +0000 (0:00:00.391) 0:01:00.514 ******* 2026-02-15 03:41:58.224183 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-02-15 03:41:58.224203 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-02-15 03:41:58.224223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-02-15 03:41:58.224241 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-02-15 03:41:58.224253 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-02-15 03:41:58.224263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-02-15 03:41:58.224274 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-02-15 03:41:58.224285 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-02-15 03:41:58.224295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-02-15 03:41:58.224306 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-02-15 03:41:58.224317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-02-15 03:41:58.224347 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-02-15 03:42:07.905501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-02-15 03:42:07.905660 | orchestrator | 2026-02-15 03:42:07.905678 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.905712 | orchestrator | Sunday 15 February 2026 03:41:58 +0000 (0:00:00.490) 0:01:01.004 ******* 2026-02-15 03:42:07.905724 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.905737 | orchestrator | 2026-02-15 03:42:07.905748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.905759 | orchestrator | Sunday 15 February 2026 03:41:58 +0000 (0:00:00.261) 0:01:01.266 ******* 2026-02-15 03:42:07.905770 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.905781 | orchestrator | 2026-02-15 03:42:07.905792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.905804 | orchestrator | Sunday 15 February 2026 03:41:58 +0000 (0:00:00.232) 0:01:01.498 ******* 2026-02-15 03:42:07.905814 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.905825 | orchestrator | 2026-02-15 03:42:07.905836 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.905847 | 
orchestrator | Sunday 15 February 2026 03:41:58 +0000 (0:00:00.266) 0:01:01.765 ******* 2026-02-15 03:42:07.905859 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.905869 | orchestrator | 2026-02-15 03:42:07.905880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.905891 | orchestrator | Sunday 15 February 2026 03:41:59 +0000 (0:00:00.227) 0:01:01.993 ******* 2026-02-15 03:42:07.905902 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.905913 | orchestrator | 2026-02-15 03:42:07.905924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.905935 | orchestrator | Sunday 15 February 2026 03:41:59 +0000 (0:00:00.747) 0:01:02.741 ******* 2026-02-15 03:42:07.905946 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.905957 | orchestrator | 2026-02-15 03:42:07.905968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.905979 | orchestrator | Sunday 15 February 2026 03:42:00 +0000 (0:00:00.247) 0:01:02.988 ******* 2026-02-15 03:42:07.905990 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.906001 | orchestrator | 2026-02-15 03:42:07.906012 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.906095 | orchestrator | Sunday 15 February 2026 03:42:00 +0000 (0:00:00.234) 0:01:03.223 ******* 2026-02-15 03:42:07.906108 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.906122 | orchestrator | 2026-02-15 03:42:07.906135 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.906147 | orchestrator | Sunday 15 February 2026 03:42:00 +0000 (0:00:00.211) 0:01:03.434 ******* 2026-02-15 03:42:07.906160 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-02-15 03:42:07.906173 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-02-15 03:42:07.906186 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-02-15 03:42:07.906199 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-02-15 03:42:07.906211 | orchestrator | 2026-02-15 03:42:07.906224 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.906236 | orchestrator | Sunday 15 February 2026 03:42:01 +0000 (0:00:00.738) 0:01:04.173 ******* 2026-02-15 03:42:07.906249 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.906262 | orchestrator | 2026-02-15 03:42:07.906275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.906288 | orchestrator | Sunday 15 February 2026 03:42:01 +0000 (0:00:00.242) 0:01:04.416 ******* 2026-02-15 03:42:07.906300 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.906313 | orchestrator | 2026-02-15 03:42:07.906326 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.906338 | orchestrator | Sunday 15 February 2026 03:42:01 +0000 (0:00:00.251) 0:01:04.667 ******* 2026-02-15 03:42:07.906350 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.906363 | orchestrator | 2026-02-15 03:42:07.906376 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-02-15 03:42:07.906399 | orchestrator | Sunday 15 February 2026 03:42:02 +0000 (0:00:00.224) 0:01:04.892 ******* 2026-02-15 03:42:07.906410 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.906421 | orchestrator | 2026-02-15 03:42:07.906432 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-02-15 03:42:07.906443 | orchestrator | Sunday 15 February 2026 03:42:02 +0000 (0:00:00.206) 0:01:05.099 ******* 2026-02-15 03:42:07.906454 | orchestrator | skipping: [testbed-node-5] 2026-02-15 
03:42:07.906465 | orchestrator | 2026-02-15 03:42:07.906476 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-02-15 03:42:07.906488 | orchestrator | Sunday 15 February 2026 03:42:02 +0000 (0:00:00.171) 0:01:05.270 ******* 2026-02-15 03:42:07.906499 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '37190823-1b54-548e-8f85-c0a5c63b57f9'}}) 2026-02-15 03:42:07.906529 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'fe68aa92-7c5f-5213-9184-27150181e978'}}) 2026-02-15 03:42:07.906541 | orchestrator | 2026-02-15 03:42:07.906552 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-02-15 03:42:07.906564 | orchestrator | Sunday 15 February 2026 03:42:02 +0000 (0:00:00.283) 0:01:05.554 ******* 2026-02-15 03:42:07.906576 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'}) 2026-02-15 03:42:07.906589 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'}) 2026-02-15 03:42:07.906600 | orchestrator | 2026-02-15 03:42:07.906611 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-02-15 03:42:07.906658 | orchestrator | Sunday 15 February 2026 03:42:04 +0000 (0:00:01.861) 0:01:07.415 ******* 2026-02-15 03:42:07.906671 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:07.906684 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:07.906695 | orchestrator | skipping: 
[testbed-node-5] 2026-02-15 03:42:07.906706 | orchestrator | 2026-02-15 03:42:07.906717 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-02-15 03:42:07.906728 | orchestrator | Sunday 15 February 2026 03:42:05 +0000 (0:00:00.406) 0:01:07.822 ******* 2026-02-15 03:42:07.906739 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'}) 2026-02-15 03:42:07.906750 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'}) 2026-02-15 03:42:07.906761 | orchestrator | 2026-02-15 03:42:07.906772 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-02-15 03:42:07.906783 | orchestrator | Sunday 15 February 2026 03:42:06 +0000 (0:00:01.419) 0:01:09.242 ******* 2026-02-15 03:42:07.906794 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:07.906805 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:07.906816 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.906827 | orchestrator | 2026-02-15 03:42:07.906837 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-02-15 03:42:07.906849 | orchestrator | Sunday 15 February 2026 03:42:06 +0000 (0:00:00.181) 0:01:09.423 ******* 2026-02-15 03:42:07.906860 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.906870 | orchestrator | 2026-02-15 03:42:07.906889 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-02-15 03:42:07.906900 | 
orchestrator | Sunday 15 February 2026 03:42:06 +0000 (0:00:00.149) 0:01:09.572 ******* 2026-02-15 03:42:07.906911 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:07.906922 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:07.906933 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.906944 | orchestrator | 2026-02-15 03:42:07.906955 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-02-15 03:42:07.906966 | orchestrator | Sunday 15 February 2026 03:42:06 +0000 (0:00:00.177) 0:01:09.750 ******* 2026-02-15 03:42:07.906977 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.906988 | orchestrator | 2026-02-15 03:42:07.906999 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-02-15 03:42:07.907010 | orchestrator | Sunday 15 February 2026 03:42:07 +0000 (0:00:00.156) 0:01:09.907 ******* 2026-02-15 03:42:07.907021 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:07.907032 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:07.907043 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.907054 | orchestrator | 2026-02-15 03:42:07.907065 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-02-15 03:42:07.907076 | orchestrator | Sunday 15 February 2026 03:42:07 +0000 (0:00:00.162) 0:01:10.069 ******* 2026-02-15 03:42:07.907086 | orchestrator | 
skipping: [testbed-node-5] 2026-02-15 03:42:07.907097 | orchestrator | 2026-02-15 03:42:07.907108 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-02-15 03:42:07.907119 | orchestrator | Sunday 15 February 2026 03:42:07 +0000 (0:00:00.144) 0:01:10.214 ******* 2026-02-15 03:42:07.907130 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:07.907141 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:07.907152 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:07.907163 | orchestrator | 2026-02-15 03:42:07.907174 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-02-15 03:42:07.907185 | orchestrator | Sunday 15 February 2026 03:42:07 +0000 (0:00:00.151) 0:01:10.365 ******* 2026-02-15 03:42:07.907196 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:42:07.907207 | orchestrator | 2026-02-15 03:42:07.907218 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-02-15 03:42:07.907229 | orchestrator | Sunday 15 February 2026 03:42:07 +0000 (0:00:00.157) 0:01:10.522 ******* 2026-02-15 03:42:07.907251 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:14.939214 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:14.939342 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.939367 | orchestrator | 2026-02-15 03:42:14.939386 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-02-15 03:42:14.939404 | orchestrator | Sunday 15 February 2026 03:42:07 +0000 (0:00:00.169) 0:01:10.692 ******* 2026-02-15 03:42:14.939422 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:14.939481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:14.939500 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.939576 | orchestrator | 2026-02-15 03:42:14.939592 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-02-15 03:42:14.939608 | orchestrator | Sunday 15 February 2026 03:42:08 +0000 (0:00:00.174) 0:01:10.866 ******* 2026-02-15 03:42:14.939623 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:14.939639 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:14.939655 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.939670 | orchestrator | 2026-02-15 03:42:14.939688 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-02-15 03:42:14.939703 | orchestrator | Sunday 15 February 2026 03:42:08 +0000 (0:00:00.427) 0:01:11.294 ******* 2026-02-15 03:42:14.939720 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.939737 | orchestrator | 2026-02-15 03:42:14.939755 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-02-15 03:42:14.939773 | orchestrator | Sunday 15 February 2026 03:42:08 +0000 
(0:00:00.170) 0:01:11.465 ******* 2026-02-15 03:42:14.939791 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.939808 | orchestrator | 2026-02-15 03:42:14.939825 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-02-15 03:42:14.939841 | orchestrator | Sunday 15 February 2026 03:42:08 +0000 (0:00:00.145) 0:01:11.610 ******* 2026-02-15 03:42:14.939852 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.939863 | orchestrator | 2026-02-15 03:42:14.939875 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-02-15 03:42:14.939886 | orchestrator | Sunday 15 February 2026 03:42:08 +0000 (0:00:00.158) 0:01:11.768 ******* 2026-02-15 03:42:14.939898 | orchestrator | ok: [testbed-node-5] => { 2026-02-15 03:42:14.939909 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-02-15 03:42:14.939920 | orchestrator | } 2026-02-15 03:42:14.939931 | orchestrator | 2026-02-15 03:42:14.939942 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-02-15 03:42:14.939953 | orchestrator | Sunday 15 February 2026 03:42:09 +0000 (0:00:00.179) 0:01:11.948 ******* 2026-02-15 03:42:14.939964 | orchestrator | ok: [testbed-node-5] => { 2026-02-15 03:42:14.939975 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-02-15 03:42:14.939985 | orchestrator | } 2026-02-15 03:42:14.939995 | orchestrator | 2026-02-15 03:42:14.940005 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-02-15 03:42:14.940015 | orchestrator | Sunday 15 February 2026 03:42:09 +0000 (0:00:00.158) 0:01:12.106 ******* 2026-02-15 03:42:14.940024 | orchestrator | ok: [testbed-node-5] => { 2026-02-15 03:42:14.940034 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-02-15 03:42:14.940043 | orchestrator | } 2026-02-15 03:42:14.940053 | orchestrator | 2026-02-15 03:42:14.940063 | orchestrator | TASK 
[Gather DB VGs with total and available size in bytes] ******************** 2026-02-15 03:42:14.940072 | orchestrator | Sunday 15 February 2026 03:42:09 +0000 (0:00:00.188) 0:01:12.295 ******* 2026-02-15 03:42:14.940082 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:42:14.940092 | orchestrator | 2026-02-15 03:42:14.940101 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-02-15 03:42:14.940111 | orchestrator | Sunday 15 February 2026 03:42:10 +0000 (0:00:00.538) 0:01:12.833 ******* 2026-02-15 03:42:14.940120 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:42:14.940130 | orchestrator | 2026-02-15 03:42:14.940140 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-02-15 03:42:14.940149 | orchestrator | Sunday 15 February 2026 03:42:10 +0000 (0:00:00.531) 0:01:13.365 ******* 2026-02-15 03:42:14.940174 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:42:14.940184 | orchestrator | 2026-02-15 03:42:14.940194 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-02-15 03:42:14.940204 | orchestrator | Sunday 15 February 2026 03:42:11 +0000 (0:00:00.524) 0:01:13.890 ******* 2026-02-15 03:42:14.940211 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:42:14.940219 | orchestrator | 2026-02-15 03:42:14.940227 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-02-15 03:42:14.940235 | orchestrator | Sunday 15 February 2026 03:42:11 +0000 (0:00:00.160) 0:01:14.051 ******* 2026-02-15 03:42:14.940243 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940251 | orchestrator | 2026-02-15 03:42:14.940259 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-02-15 03:42:14.940267 | orchestrator | Sunday 15 February 2026 03:42:11 +0000 (0:00:00.122) 0:01:14.174 ******* 2026-02-15 03:42:14.940274 | orchestrator | 
skipping: [testbed-node-5] 2026-02-15 03:42:14.940282 | orchestrator | 2026-02-15 03:42:14.940290 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-02-15 03:42:14.940298 | orchestrator | Sunday 15 February 2026 03:42:11 +0000 (0:00:00.380) 0:01:14.554 ******* 2026-02-15 03:42:14.940306 | orchestrator | ok: [testbed-node-5] => { 2026-02-15 03:42:14.940327 | orchestrator |  "vgs_report": { 2026-02-15 03:42:14.940336 | orchestrator |  "vg": [] 2026-02-15 03:42:14.940362 | orchestrator |  } 2026-02-15 03:42:14.940370 | orchestrator | } 2026-02-15 03:42:14.940379 | orchestrator | 2026-02-15 03:42:14.940387 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-02-15 03:42:14.940395 | orchestrator | Sunday 15 February 2026 03:42:11 +0000 (0:00:00.166) 0:01:14.720 ******* 2026-02-15 03:42:14.940403 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940411 | orchestrator | 2026-02-15 03:42:14.940419 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-02-15 03:42:14.940426 | orchestrator | Sunday 15 February 2026 03:42:12 +0000 (0:00:00.157) 0:01:14.878 ******* 2026-02-15 03:42:14.940434 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940442 | orchestrator | 2026-02-15 03:42:14.940450 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-02-15 03:42:14.940458 | orchestrator | Sunday 15 February 2026 03:42:12 +0000 (0:00:00.140) 0:01:15.019 ******* 2026-02-15 03:42:14.940466 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940474 | orchestrator | 2026-02-15 03:42:14.940482 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-02-15 03:42:14.940490 | orchestrator | Sunday 15 February 2026 03:42:12 +0000 (0:00:00.146) 0:01:15.165 ******* 2026-02-15 03:42:14.940498 | orchestrator | 
skipping: [testbed-node-5] 2026-02-15 03:42:14.940506 | orchestrator | 2026-02-15 03:42:14.940541 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-02-15 03:42:14.940552 | orchestrator | Sunday 15 February 2026 03:42:12 +0000 (0:00:00.155) 0:01:15.320 ******* 2026-02-15 03:42:14.940560 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940568 | orchestrator | 2026-02-15 03:42:14.940576 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-02-15 03:42:14.940584 | orchestrator | Sunday 15 February 2026 03:42:12 +0000 (0:00:00.159) 0:01:15.480 ******* 2026-02-15 03:42:14.940592 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940600 | orchestrator | 2026-02-15 03:42:14.940608 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-02-15 03:42:14.940616 | orchestrator | Sunday 15 February 2026 03:42:12 +0000 (0:00:00.148) 0:01:15.628 ******* 2026-02-15 03:42:14.940624 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940632 | orchestrator | 2026-02-15 03:42:14.940640 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-02-15 03:42:14.940648 | orchestrator | Sunday 15 February 2026 03:42:12 +0000 (0:00:00.151) 0:01:15.779 ******* 2026-02-15 03:42:14.940656 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940671 | orchestrator | 2026-02-15 03:42:14.940679 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-02-15 03:42:14.940687 | orchestrator | Sunday 15 February 2026 03:42:13 +0000 (0:00:00.175) 0:01:15.954 ******* 2026-02-15 03:42:14.940695 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940703 | orchestrator | 2026-02-15 03:42:14.940711 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-02-15 
03:42:14.940719 | orchestrator | Sunday 15 February 2026 03:42:13 +0000 (0:00:00.169) 0:01:16.124 ******* 2026-02-15 03:42:14.940727 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940735 | orchestrator | 2026-02-15 03:42:14.940743 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-02-15 03:42:14.940751 | orchestrator | Sunday 15 February 2026 03:42:13 +0000 (0:00:00.158) 0:01:16.282 ******* 2026-02-15 03:42:14.940759 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940767 | orchestrator | 2026-02-15 03:42:14.940775 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-02-15 03:42:14.940783 | orchestrator | Sunday 15 February 2026 03:42:13 +0000 (0:00:00.417) 0:01:16.700 ******* 2026-02-15 03:42:14.940791 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940799 | orchestrator | 2026-02-15 03:42:14.940807 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-02-15 03:42:14.940815 | orchestrator | Sunday 15 February 2026 03:42:14 +0000 (0:00:00.185) 0:01:16.885 ******* 2026-02-15 03:42:14.940823 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940830 | orchestrator | 2026-02-15 03:42:14.940839 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-02-15 03:42:14.940847 | orchestrator | Sunday 15 February 2026 03:42:14 +0000 (0:00:00.153) 0:01:17.038 ******* 2026-02-15 03:42:14.940855 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940862 | orchestrator | 2026-02-15 03:42:14.940870 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-02-15 03:42:14.940878 | orchestrator | Sunday 15 February 2026 03:42:14 +0000 (0:00:00.145) 0:01:17.184 ******* 2026-02-15 03:42:14.940886 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:14.940895 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:14.940903 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940911 | orchestrator | 2026-02-15 03:42:14.940919 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-02-15 03:42:14.940927 | orchestrator | Sunday 15 February 2026 03:42:14 +0000 (0:00:00.181) 0:01:17.366 ******* 2026-02-15 03:42:14.940935 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:14.940943 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:14.940951 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:14.940959 | orchestrator | 2026-02-15 03:42:14.940967 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-02-15 03:42:14.940979 | orchestrator | Sunday 15 February 2026 03:42:14 +0000 (0:00:00.168) 0:01:17.534 ******* 2026-02-15 03:42:14.940993 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:18.168422 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:18.168642 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:18.168671 | orchestrator | 2026-02-15 03:42:18.168685 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-02-15 03:42:18.168726 | orchestrator | Sunday 15 February 2026 03:42:14 +0000 (0:00:00.190) 0:01:17.724 ******* 2026-02-15 03:42:18.168738 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:18.168750 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:18.168761 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:18.168772 | orchestrator | 2026-02-15 03:42:18.168784 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-02-15 03:42:18.168795 | orchestrator | Sunday 15 February 2026 03:42:15 +0000 (0:00:00.163) 0:01:17.888 ******* 2026-02-15 03:42:18.168806 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:18.168817 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:18.168829 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:18.168839 | orchestrator | 2026-02-15 03:42:18.168851 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-02-15 03:42:18.168861 | orchestrator | Sunday 15 February 2026 03:42:15 +0000 (0:00:00.175) 0:01:18.063 ******* 2026-02-15 03:42:18.168872 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:18.168883 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:18.168895 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:18.168905 | orchestrator | 2026-02-15 03:42:18.168916 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-02-15 03:42:18.168927 | orchestrator | Sunday 15 February 2026 03:42:15 +0000 (0:00:00.154) 0:01:18.218 ******* 2026-02-15 03:42:18.168938 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:18.168952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:18.168964 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:18.168977 | orchestrator | 2026-02-15 03:42:18.168990 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-02-15 03:42:18.169003 | orchestrator | Sunday 15 February 2026 03:42:15 +0000 (0:00:00.181) 0:01:18.399 ******* 2026-02-15 03:42:18.169017 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:18.169029 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:18.169041 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:18.169054 | orchestrator | 2026-02-15 03:42:18.169067 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-02-15 03:42:18.169079 | orchestrator | Sunday 15 February 2026 03:42:15 +0000 (0:00:00.172) 0:01:18.572 ******* 2026-02-15 03:42:18.169090 | 
orchestrator | ok: [testbed-node-5] 2026-02-15 03:42:18.169101 | orchestrator | 2026-02-15 03:42:18.169112 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-02-15 03:42:18.169123 | orchestrator | Sunday 15 February 2026 03:42:16 +0000 (0:00:00.771) 0:01:19.343 ******* 2026-02-15 03:42:18.169135 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:42:18.169146 | orchestrator | 2026-02-15 03:42:18.169164 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-02-15 03:42:18.169175 | orchestrator | Sunday 15 February 2026 03:42:17 +0000 (0:00:00.546) 0:01:19.890 ******* 2026-02-15 03:42:18.169186 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:42:18.169197 | orchestrator | 2026-02-15 03:42:18.169208 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-02-15 03:42:18.169220 | orchestrator | Sunday 15 February 2026 03:42:17 +0000 (0:00:00.158) 0:01:20.049 ******* 2026-02-15 03:42:18.169231 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'vg_name': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'}) 2026-02-15 03:42:18.169243 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'vg_name': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'}) 2026-02-15 03:42:18.169287 | orchestrator | 2026-02-15 03:42:18.169300 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-02-15 03:42:18.169311 | orchestrator | Sunday 15 February 2026 03:42:17 +0000 (0:00:00.188) 0:01:20.237 ******* 2026-02-15 03:42:18.169343 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:18.169354 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:18.169366 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:18.169377 | orchestrator | 2026-02-15 03:42:18.169388 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-02-15 03:42:18.169399 | orchestrator | Sunday 15 February 2026 03:42:17 +0000 (0:00:00.165) 0:01:20.403 ******* 2026-02-15 03:42:18.169410 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:18.169421 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:18.169432 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:18.169443 | orchestrator | 2026-02-15 03:42:18.169454 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-02-15 03:42:18.169465 | orchestrator | Sunday 15 February 2026 03:42:17 +0000 (0:00:00.184) 0:01:20.588 ******* 2026-02-15 03:42:18.169476 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 03:42:18.169487 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 03:42:18.169498 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:18.169535 | orchestrator | 2026-02-15 03:42:18.169548 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-02-15 03:42:18.169560 | orchestrator | Sunday 15 February 2026 03:42:17 +0000 (0:00:00.168) 0:01:20.756 ******* 2026-02-15 03:42:18.169571 | 
orchestrator | ok: [testbed-node-5] => { 2026-02-15 03:42:18.169581 | orchestrator |  "lvm_report": { 2026-02-15 03:42:18.169593 | orchestrator |  "lv": [ 2026-02-15 03:42:18.169604 | orchestrator |  { 2026-02-15 03:42:18.169616 | orchestrator |  "lv_name": "osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9", 2026-02-15 03:42:18.169627 | orchestrator |  "vg_name": "ceph-37190823-1b54-548e-8f85-c0a5c63b57f9" 2026-02-15 03:42:18.169639 | orchestrator |  }, 2026-02-15 03:42:18.169649 | orchestrator |  { 2026-02-15 03:42:18.169660 | orchestrator |  "lv_name": "osd-block-fe68aa92-7c5f-5213-9184-27150181e978", 2026-02-15 03:42:18.169671 | orchestrator |  "vg_name": "ceph-fe68aa92-7c5f-5213-9184-27150181e978" 2026-02-15 03:42:18.169695 | orchestrator |  } 2026-02-15 03:42:18.169706 | orchestrator |  ], 2026-02-15 03:42:18.169717 | orchestrator |  "pv": [ 2026-02-15 03:42:18.169728 | orchestrator |  { 2026-02-15 03:42:18.169738 | orchestrator |  "pv_name": "/dev/sdb", 2026-02-15 03:42:18.169749 | orchestrator |  "vg_name": "ceph-37190823-1b54-548e-8f85-c0a5c63b57f9" 2026-02-15 03:42:18.169760 | orchestrator |  }, 2026-02-15 03:42:18.169771 | orchestrator |  { 2026-02-15 03:42:18.169781 | orchestrator |  "pv_name": "/dev/sdc", 2026-02-15 03:42:18.169792 | orchestrator |  "vg_name": "ceph-fe68aa92-7c5f-5213-9184-27150181e978" 2026-02-15 03:42:18.169803 | orchestrator |  } 2026-02-15 03:42:18.169814 | orchestrator |  ] 2026-02-15 03:42:18.169824 | orchestrator |  } 2026-02-15 03:42:18.169835 | orchestrator | } 2026-02-15 03:42:18.169846 | orchestrator | 2026-02-15 03:42:18.169858 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:42:18.169868 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-15 03:42:18.169880 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-15 03:42:18.169890 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-02-15 03:42:18.169901 | orchestrator | 2026-02-15 03:42:18.169912 | orchestrator | 2026-02-15 03:42:18.169923 | orchestrator | 2026-02-15 03:42:18.169934 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:42:18.169944 | orchestrator | Sunday 15 February 2026 03:42:18 +0000 (0:00:00.172) 0:01:20.928 ******* 2026-02-15 03:42:18.169955 | orchestrator | =============================================================================== 2026-02-15 03:42:18.169966 | orchestrator | Create block VGs -------------------------------------------------------- 5.78s 2026-02-15 03:42:18.169977 | orchestrator | Create block LVs -------------------------------------------------------- 4.25s 2026-02-15 03:42:18.169987 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.86s 2026-02-15 03:42:18.169998 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.78s 2026-02-15 03:42:18.170008 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.64s 2026-02-15 03:42:18.170084 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.60s 2026-02-15 03:42:18.170105 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s 2026-02-15 03:42:18.170123 | orchestrator | Add known links to the list of available block devices ------------------ 1.46s 2026-02-15 03:42:18.170195 | orchestrator | Add known partitions to the list of available block devices ------------- 1.43s 2026-02-15 03:42:18.604401 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.36s 2026-02-15 03:42:18.604587 | orchestrator | Add known links to the list of available block devices ------------------ 1.03s 2026-02-15 03:42:18.604617 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2026-02-15 03:42:18.604636 | orchestrator | Calculate VG sizes (with buffer) ---------------------------------------- 0.88s 2026-02-15 03:42:18.604655 | orchestrator | Print LVM report data --------------------------------------------------- 0.87s 2026-02-15 03:42:18.604672 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-02-15 03:42:18.604688 | orchestrator | Get initial list of available block devices ----------------------------- 0.80s 2026-02-15 03:42:18.604706 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s 2026-02-15 03:42:18.604724 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-02-15 03:42:18.604744 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.79s 2026-02-15 03:42:18.604799 | orchestrator | Add known links to the list of available block devices ------------------ 0.78s 2026-02-15 03:42:31.237174 | orchestrator | 2026-02-15 03:42:31 | INFO  | Task 76bcf90d-f771-4656-ae23-4f28b785111c (facts) was prepared for execution. 2026-02-15 03:42:31.237255 | orchestrator | 2026-02-15 03:42:31 | INFO  | It takes a moment until task 76bcf90d-f771-4656-ae23-4f28b785111c (facts) has been started and output is visible here. 
2026-02-15 03:42:45.340462 | orchestrator | 2026-02-15 03:42:45.340636 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-15 03:42:45.340652 | orchestrator | 2026-02-15 03:42:45.340662 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-15 03:42:45.340671 | orchestrator | Sunday 15 February 2026 03:42:36 +0000 (0:00:00.314) 0:00:00.314 ******* 2026-02-15 03:42:45.340679 | orchestrator | ok: [testbed-manager] 2026-02-15 03:42:45.340689 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:42:45.340697 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:42:45.340706 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:42:45.340714 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:42:45.340722 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:42:45.340730 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:42:45.340738 | orchestrator | 2026-02-15 03:42:45.340747 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-15 03:42:45.340755 | orchestrator | Sunday 15 February 2026 03:42:37 +0000 (0:00:01.266) 0:00:01.581 ******* 2026-02-15 03:42:45.340764 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:42:45.340772 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:42:45.340781 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:42:45.340789 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:42:45.340797 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:42:45.340805 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:42:45.340813 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:45.340821 | orchestrator | 2026-02-15 03:42:45.340829 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-15 03:42:45.340837 | orchestrator | 2026-02-15 03:42:45.340846 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-15 03:42:45.340854 | orchestrator | Sunday 15 February 2026 03:42:38 +0000 (0:00:01.406) 0:00:02.988 ******* 2026-02-15 03:42:45.340863 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:42:45.340871 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:42:45.340879 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:42:45.340887 | orchestrator | ok: [testbed-manager] 2026-02-15 03:42:45.340895 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:42:45.340903 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:42:45.340911 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:42:45.340919 | orchestrator | 2026-02-15 03:42:45.340927 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-15 03:42:45.340935 | orchestrator | 2026-02-15 03:42:45.340943 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-15 03:42:45.340951 | orchestrator | Sunday 15 February 2026 03:42:44 +0000 (0:00:05.468) 0:00:08.456 ******* 2026-02-15 03:42:45.340959 | orchestrator | skipping: [testbed-manager] 2026-02-15 03:42:45.340967 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:42:45.340976 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:42:45.340984 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:42:45.340992 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:42:45.341000 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:42:45.341008 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:42:45.341017 | orchestrator | 2026-02-15 03:42:45.341027 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:42:45.341037 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:42:45.341048 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-15 03:42:45.341081 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:42:45.341091 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:42:45.341115 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:42:45.341125 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:42:45.341137 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 03:42:45.341150 | orchestrator | 2026-02-15 03:42:45.341164 | orchestrator | 2026-02-15 03:42:45.341177 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:42:45.341188 | orchestrator | Sunday 15 February 2026 03:42:44 +0000 (0:00:00.607) 0:00:09.064 ******* 2026-02-15 03:42:45.341201 | orchestrator | =============================================================================== 2026-02-15 03:42:45.341214 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.47s 2026-02-15 03:42:45.341226 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.41s 2026-02-15 03:42:45.341238 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.27s 2026-02-15 03:42:45.341249 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2026-02-15 03:42:47.921499 | orchestrator | 2026-02-15 03:42:47 | INFO  | Task 7f5dd455-0585-410b-97dd-c419c73f93e1 (ceph) was prepared for execution. 2026-02-15 03:42:47.921675 | orchestrator | 2026-02-15 03:42:47 | INFO  | It takes a moment until task 7f5dd455-0585-410b-97dd-c419c73f93e1 (ceph) has been started and output is visible here. 
2026-02-15 03:43:07.476753 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-15 03:43:07.476869 | orchestrator | 2.16.14 2026-02-15 03:43:07.476883 | orchestrator | 2026-02-15 03:43:07.476893 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-02-15 03:43:07.476902 | orchestrator | 2026-02-15 03:43:07.476910 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 03:43:07.476918 | orchestrator | Sunday 15 February 2026 03:42:53 +0000 (0:00:00.958) 0:00:00.958 ******* 2026-02-15 03:43:07.476927 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:43:07.476935 | orchestrator | 2026-02-15 03:43:07.476943 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 03:43:07.476951 | orchestrator | Sunday 15 February 2026 03:42:54 +0000 (0:00:01.300) 0:00:02.259 ******* 2026-02-15 03:43:07.476958 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:43:07.476966 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:43:07.476973 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:43:07.476981 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:43:07.476989 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:43:07.476996 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:43:07.477004 | orchestrator | 2026-02-15 03:43:07.477011 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 03:43:07.477019 | orchestrator | Sunday 15 February 2026 03:42:56 +0000 (0:00:01.338) 0:00:03.597 ******* 2026-02-15 03:43:07.477026 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:43:07.477033 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:43:07.477040 | orchestrator | ok: [testbed-node-5] 2026-02-15 
03:43:07.477048 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:43:07.477055 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:43:07.477082 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:43:07.477090 | orchestrator | 2026-02-15 03:43:07.477098 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 03:43:07.477111 | orchestrator | Sunday 15 February 2026 03:42:57 +0000 (0:00:00.861) 0:00:04.458 ******* 2026-02-15 03:43:07.477123 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:43:07.477135 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:43:07.477147 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:43:07.477158 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:43:07.477169 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:43:07.477181 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:43:07.477191 | orchestrator | 2026-02-15 03:43:07.477201 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 03:43:07.477211 | orchestrator | Sunday 15 February 2026 03:42:57 +0000 (0:00:00.956) 0:00:05.415 ******* 2026-02-15 03:43:07.477222 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:43:07.477232 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:43:07.477243 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:43:07.477254 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:43:07.477265 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:43:07.477276 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:43:07.477288 | orchestrator | 2026-02-15 03:43:07.477301 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 03:43:07.477314 | orchestrator | Sunday 15 February 2026 03:42:58 +0000 (0:00:00.873) 0:00:06.289 ******* 2026-02-15 03:43:07.477328 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:43:07.477341 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:43:07.477353 | orchestrator | ok: 
[testbed-node-5]
2026-02-15 03:43:07.477365 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:07.477378 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:07.477390 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:07.477402 | orchestrator |
2026-02-15 03:43:07.477414 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-15 03:43:07.477426 | orchestrator | Sunday 15 February 2026 03:42:59 +0000 (0:00:00.648) 0:00:06.937 *******
2026-02-15 03:43:07.477437 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:07.477450 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:07.477462 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:07.477475 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:07.477490 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:07.477502 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:07.477542 | orchestrator |
2026-02-15 03:43:07.477553 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-15 03:43:07.477562 | orchestrator | Sunday 15 February 2026 03:43:00 +0000 (0:00:00.927) 0:00:07.864 *******
2026-02-15 03:43:07.477571 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:07.477581 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:07.477603 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:07.477612 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:07.477620 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:07.477629 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:07.477638 | orchestrator |
2026-02-15 03:43:07.477645 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-15 03:43:07.477652 | orchestrator | Sunday 15 February 2026 03:43:01 +0000 (0:00:00.655) 0:00:08.519 *******
2026-02-15 03:43:07.477660 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:07.477667 | orchestrator |
ok: [testbed-node-4]
2026-02-15 03:43:07.477674 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:07.477682 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:07.477689 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:07.477696 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:07.477703 | orchestrator |
2026-02-15 03:43:07.477711 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-15 03:43:07.477718 | orchestrator | Sunday 15 February 2026 03:43:02 +0000 (0:00:00.930) 0:00:09.449 *******
2026-02-15 03:43:07.477726 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 03:43:07.477742 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 03:43:07.477749 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 03:43:07.477756 | orchestrator |
2026-02-15 03:43:07.477764 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-15 03:43:07.477775 | orchestrator | Sunday 15 February 2026 03:43:02 +0000 (0:00:00.685) 0:00:10.135 *******
2026-02-15 03:43:07.477786 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:07.477802 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:07.477819 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:07.477854 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:07.477866 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:07.477878 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:07.477890 | orchestrator |
2026-02-15 03:43:07.477901 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-15 03:43:07.477912 | orchestrator | Sunday 15 February 2026 03:43:03 +0000 (0:00:00.850) 0:00:10.985 *******
2026-02-15 03:43:07.477924 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] =>
(item=testbed-node-0)
2026-02-15 03:43:07.477935 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 03:43:07.477947 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 03:43:07.477959 | orchestrator |
2026-02-15 03:43:07.477971 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-15 03:43:07.477984 | orchestrator | Sunday 15 February 2026 03:43:05 +0000 (0:00:02.425) 0:00:13.411 *******
2026-02-15 03:43:07.477996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-15 03:43:07.478009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-15 03:43:07.478095 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-15 03:43:07.478111 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:07.478123 | orchestrator |
2026-02-15 03:43:07.478135 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-15 03:43:07.478146 | orchestrator | Sunday 15 February 2026 03:43:06 +0000 (0:00:00.466) 0:00:13.877 *******
2026-02-15 03:43:07.478160 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-15 03:43:07.478175 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-15 03:43:07.478188 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item':
'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-15 03:43:07.478201 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:07.478214 | orchestrator |
2026-02-15 03:43:07.478227 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-15 03:43:07.478239 | orchestrator | Sunday 15 February 2026 03:43:07 +0000 (0:00:00.633) 0:00:14.511 *******
2026-02-15 03:43:07.478252 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:07.478263 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:07.478289 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:07.478297 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:07.478304 | orchestrator |
2026-02-15 03:43:07.478312 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container]
***************************
2026-02-15 03:43:07.478319 | orchestrator | Sunday 15 February 2026 03:43:07 +0000 (0:00:00.184) 0:00:14.696 *******
2026-02-15 03:43:07.478339 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 03:43:04.498543', 'end': '2026-02-15 03:43:04.541273', 'delta': '0:00:00.042730', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-15 03:43:18.024154 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 03:43:05.054370', 'end': '2026-02-15 03:43:05.094360', 'delta': '0:00:00.039990', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-15 03:43:18.024711 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 03:43:05.586944', 'end': '2026-02-15 03:43:05.631273', 'delta':
'0:00:00.044329', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-15 03:43:18.024724 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.024730 | orchestrator |
2026-02-15 03:43:18.024735 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-15 03:43:18.024741 | orchestrator | Sunday 15 February 2026 03:43:07 +0000 (0:00:00.184) 0:00:14.880 *******
2026-02-15 03:43:18.024745 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:18.024750 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:18.024754 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:18.024758 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:18.024761 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:18.024765 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:18.024769 | orchestrator |
2026-02-15 03:43:18.024901 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-15 03:43:18.024906 | orchestrator | Sunday 15 February 2026 03:43:08 +0000 (0:00:00.762) 0:00:15.642 *******
2026-02-15 03:43:18.024910 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-15 03:43:18.024915 | orchestrator |
2026-02-15 03:43:18.024919 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-15 03:43:18.024923 | orchestrator | Sunday 15 February 2026 03:43:09 +0000 (0:00:00.881) 0:00:16.524 *******
2026-02-15 03:43:18.024927 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.024931 |
orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:18.024935 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:18.024939 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:18.024956 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:18.024961 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:18.024965 | orchestrator |
2026-02-15 03:43:18.024969 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-15 03:43:18.024973 | orchestrator | Sunday 15 February 2026 03:43:10 +0000 (0:00:00.928) 0:00:17.452 *******
2026-02-15 03:43:18.024976 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.024980 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:18.024984 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:18.024999 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:18.025003 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:18.025006 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:18.025010 | orchestrator |
2026-02-15 03:43:18.025014 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 03:43:18.025018 | orchestrator | Sunday 15 February 2026 03:43:11 +0000 (0:00:01.248) 0:00:18.701 *******
2026-02-15 03:43:18.025022 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.025025 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:18.025029 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:18.025033 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:18.025037 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:18.025041 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:18.025044 | orchestrator |
2026-02-15 03:43:18.025048 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-15 03:43:18.025052 | orchestrator | Sunday 15 February 2026 03:43:11
+0000 (0:00:00.649) 0:00:19.350 *******
2026-02-15 03:43:18.025056 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.025059 | orchestrator |
2026-02-15 03:43:18.025063 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-15 03:43:18.025067 | orchestrator | Sunday 15 February 2026 03:43:12 +0000 (0:00:00.140) 0:00:19.490 *******
2026-02-15 03:43:18.025071 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.025074 | orchestrator |
2026-02-15 03:43:18.025078 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 03:43:18.025082 | orchestrator | Sunday 15 February 2026 03:43:12 +0000 (0:00:00.232) 0:00:19.723 *******
2026-02-15 03:43:18.025086 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.025090 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:18.025094 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:18.025097 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:18.025101 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:18.025105 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:18.025109 | orchestrator |
2026-02-15 03:43:18.025126 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-15 03:43:18.025131 | orchestrator | Sunday 15 February 2026 03:43:13 +0000 (0:00:00.863) 0:00:20.586 *******
2026-02-15 03:43:18.025134 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.025138 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:18.025156 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:18.025160 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:18.025168 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:18.025172 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:18.025176 | orchestrator |
2026-02-15 03:43:18.025180 | orchestrator | TASK [ceph-facts :
Set_fact build devices from resolved symlinks] **************
2026-02-15 03:43:18.025184 | orchestrator | Sunday 15 February 2026 03:43:13 +0000 (0:00:00.650) 0:00:21.237 *******
2026-02-15 03:43:18.025188 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.025191 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:18.025195 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:18.025199 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:18.025203 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:18.025207 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:18.025210 | orchestrator |
2026-02-15 03:43:18.025214 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-15 03:43:18.025218 | orchestrator | Sunday 15 February 2026 03:43:14 +0000 (0:00:00.655) 0:00:22.146 *******
2026-02-15 03:43:18.025222 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.025226 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:18.025230 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:18.025233 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:18.025237 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:18.025241 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:18.025245 | orchestrator |
2026-02-15 03:43:18.025249 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-15 03:43:18.025252 | orchestrator | Sunday 15 February 2026 03:43:15 +0000 (0:00:00.655) 0:00:22.801 *******
2026-02-15 03:43:18.025256 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.025260 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:18.025264 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:18.025268 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:18.025271 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:18.025275 | orchestrator
| skipping: [testbed-node-2]
2026-02-15 03:43:18.025279 | orchestrator |
2026-02-15 03:43:18.025283 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-15 03:43:18.025287 | orchestrator | Sunday 15 February 2026 03:43:16 +0000 (0:00:00.877) 0:00:23.679 *******
2026-02-15 03:43:18.025290 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.025294 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:18.025298 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:18.025302 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:18.025305 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:18.025309 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:18.025313 | orchestrator |
2026-02-15 03:43:18.025317 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-15 03:43:18.025322 | orchestrator | Sunday 15 February 2026 03:43:16 +0000 (0:00:00.658) 0:00:24.338 *******
2026-02-15 03:43:18.025325 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.025329 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:18.025333 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:18.025337 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:18.025341 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:18.025345 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:18.025348 | orchestrator |
2026-02-15 03:43:18.025352 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-15 03:43:18.025356 | orchestrator | Sunday 15 February 2026 03:43:17 +0000 (0:00:00.907) 0:00:25.245 *******
2026-02-15 03:43:18.025365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids':
['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.025375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.025383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.157909 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.158061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.158077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.158087 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.158095 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.158104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value':
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.158127 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.158178 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids':
['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:43:18.158203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:43:18.158223 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:43:18.158255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:43:18.158271 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:43:18.158293 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.299426 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee',
'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.299647 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:18.299682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.299706 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.299726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.299745 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [],
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.299853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.299880 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.299892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.299927 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.299942 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.299956 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.299979 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-15 03:43:18.300034 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-15 03:43:18.300074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.496258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-15 03:43:18.496355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.496368 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-15 03:43:18.496421 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-15 03:43:18.496429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.496435 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-02-15 03:43:18.496442 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.496448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.496469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.496476 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:43:18.496483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.496496 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': 
{'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 
'vendor': 'QEMU', 'virtual': 1}})  2026-02-15 03:43:18.496510 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-15 03:43:18.496541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.693769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.693857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978'], 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-15 03:43:18.693885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.693906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-15 03:43:18.693915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.693922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.693929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}) 
 2026-02-15 03:43:18.693937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.693957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.693964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.693981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-15 03:43:18.693990 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-02-15 03:43:18.693997 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:43:18.694005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.694056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.951370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-02-15 03:43:18.951468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.951476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.951482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.951501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.951507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-02-15 03:43:18.951581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:43:18.951604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:43:18.951611 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:18.951618 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:18.951624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.951634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.951640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.951646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.951652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.951658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:18.951668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:19.195455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:43:19.195628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:43:19.195647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:43:19.195657 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:19.195667 | orchestrator |
2026-02-15 03:43:19.195675 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-15 03:43:19.195684 | orchestrator | Sunday 15 February 2026 03:43:18 +0000 (0:00:01.110) 0:00:26.356 *******
2026-02-15 03:43:19.195708 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.195736 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.195745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.195758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.195765 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.195773 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.195781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.195806 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.231509 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.231638 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.231652 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee', 'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.231659 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.231683 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.231690 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.231721 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.231731 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.231744 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.231757 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.660063 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.660161 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.660173 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.660199 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.660207 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.660216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.660241 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.660261 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.660283 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.660307 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805345 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805426 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805453 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805460 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805467 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805493 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805500 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805511 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805587 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805594 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805601 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:19.805607 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0,
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:19.805629 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'labels': ['BOOT'], 'masters': 
[], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031654 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031762 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031776 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031814 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031822 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:43:20.031847 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031857 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031863 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031911 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031923 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031938 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031946 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.031964 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.201493 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-15 03:43:20.201606 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.201614 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:43:20.201621 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.201638 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.201644 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.201649 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.201657 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.201666 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.201671 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.201676 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.201688 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': 
['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.464160 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.464254 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:43:20.464271 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:43:20.464282 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:43:20.464295 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.464308 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.464319 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:43:20.464332 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:20.464363 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:20.464403 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:20.464410 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:20.464417 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:20.464430 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:20.464448 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-36-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:43:32.838556 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:32.838644 | orchestrator |
2026-02-15 03:43:32.838653 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-15 03:43:32.838660 | orchestrator | Sunday 15 February 2026 03:43:20 +0000 (0:00:01.511) 0:00:27.868 *******
2026-02-15 03:43:32.838665 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:32.838671 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:32.838676 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:32.838681 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:32.838686 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:32.838691 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:32.838696 | orchestrator |
2026-02-15 03:43:32.838701 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-15 03:43:32.838706 | orchestrator | Sunday 15 February 2026 03:43:21 +0000 (0:00:01.051) 0:00:28.920 *******
2026-02-15 03:43:32.838710 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:32.838715 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:32.838719 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:32.838724 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:32.838729 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:32.838733 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:32.838738 | orchestrator |
2026-02-15 03:43:32.838743 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 03:43:32.838747 | orchestrator | Sunday 15 February 2026 03:43:22 +0000 (0:00:00.625) 0:00:29.785 *******
2026-02-15 03:43:32.838752 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:32.838757 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:32.838761 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:32.838766 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:32.838770 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:32.838776 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:32.838833 | orchestrator |
2026-02-15 03:43:32.838840 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 03:43:32.838845 | orchestrator | Sunday 15 February 2026 03:43:22 +0000 (0:00:00.916) 0:00:30.411 *******
2026-02-15 03:43:32.838849 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:32.838854 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:32.838859 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:32.838863 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:32.838868 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:32.838887 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:32.838892 | orchestrator |
2026-02-15 03:43:32.838897 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 03:43:32.838902 | orchestrator | Sunday 15 February 2026 03:43:23 +0000 (0:00:00.916) 0:00:31.327 *******
2026-02-15 03:43:32.838906 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:32.838911 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:32.838915 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:32.838920 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:32.838924 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:32.838929 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:32.838934 | orchestrator |
2026-02-15 03:43:32.838938 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 03:43:32.838943 | orchestrator | Sunday 15 February 2026 03:43:24 +0000 (0:00:00.675) 0:00:32.002 *******
2026-02-15 03:43:32.838948 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:32.838952 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:32.838957 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:32.838961 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:32.838966 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:32.838970 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:32.838975 | orchestrator |
2026-02-15 03:43:32.838980 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-15 03:43:32.838985 | orchestrator | Sunday 15 February 2026 03:43:25 +0000 (0:00:00.908) 0:00:32.911 *******
2026-02-15 03:43:32.838990 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-15 03:43:32.838995 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-15 03:43:32.838999 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-15 03:43:32.839004 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-15 03:43:32.839008 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-15 03:43:32.839013 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 03:43:32.839017 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-15 03:43:32.839022 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-15 03:43:32.839026 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-15 03:43:32.839041 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-15 03:43:32.839046 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-15 03:43:32.839051 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-15 03:43:32.839055 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 03:43:32.839060 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-15 03:43:32.839064 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-15 03:43:32.839069 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-15 03:43:32.839073 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-15 03:43:32.839078 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-15 03:43:32.839082 | orchestrator |
2026-02-15 03:43:32.839087 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-15 03:43:32.839091 | orchestrator | Sunday 15 February 2026 03:43:27 +0000 (0:00:01.746) 0:00:34.658 *******
2026-02-15 03:43:32.839097 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-15 03:43:32.839103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-15 03:43:32.839108 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-15 03:43:32.839114 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:32.839119 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-15 03:43:32.839124 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-15 03:43:32.839130 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-15 03:43:32.839146 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:32.839156 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-15 03:43:32.839161 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-15 03:43:32.839167 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-15 03:43:32.839172 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:32.839177 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 03:43:32.839182 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-15 03:43:32.839187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-15 03:43:32.839192 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:32.839197 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-15 03:43:32.839203 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 03:43:32.839208 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-15 03:43:32.839213 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:32.839218 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-15 03:43:32.839224 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-15 03:43:32.839229 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-15 03:43:32.839234 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:32.839239 | orchestrator |
2026-02-15 03:43:32.839245 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-15 03:43:32.839250 | orchestrator | Sunday 15 February 2026 03:43:28 +0000 (0:00:01.007) 0:00:35.666 *******
2026-02-15 03:43:32.839255 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:32.839260 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:32.839266 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:32.839272 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:43:32.839277 | orchestrator |
2026-02-15 03:43:32.839283 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 03:43:32.839290 | orchestrator | Sunday 15 February 2026 03:43:29 +0000 (0:00:01.081) 0:00:36.747 *******
2026-02-15 03:43:32.839295 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:32.839301 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:32.839306 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:32.839311 | orchestrator |
2026-02-15 03:43:32.839317 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 03:43:32.839325 | orchestrator | Sunday 15 February 2026 03:43:29 +0000 (0:00:00.361) 0:00:37.109 *******
2026-02-15 03:43:32.839333 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:32.839341 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:32.839348 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:32.839356 | orchestrator |
2026-02-15 03:43:32.839363 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 03:43:32.839371 | orchestrator | Sunday 15 February 2026 03:43:30 +0000 (0:00:00.376) 0:00:37.485 *******
2026-02-15 03:43:32.839379 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:32.839386 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:32.839393 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:32.839401 | orchestrator |
2026-02-15 03:43:32.839409 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 03:43:32.839417 | orchestrator | Sunday 15 February 2026 03:43:30 +0000 (0:00:00.384) 0:00:37.870 *******
2026-02-15 03:43:32.839424 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:32.839432 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:32.839439 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:32.839447 | orchestrator |
2026-02-15 03:43:32.839455 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 03:43:32.839462 | orchestrator | Sunday 15 February 2026 03:43:31 +0000 (0:00:00.741) 0:00:38.612 *******
2026-02-15 03:43:32.839477 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:43:32.839485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 03:43:32.839493 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 03:43:32.839502 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:32.839508 | orchestrator |
2026-02-15 03:43:32.839513 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 03:43:32.839557 | orchestrator | Sunday 15 February 2026 03:43:31 +0000 (0:00:00.449) 0:00:39.062 *******
2026-02-15 03:43:32.839562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:43:32.839567 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 03:43:32.839571 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 03:43:32.839576 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:32.839580 | orchestrator |
2026-02-15 03:43:32.839585 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 03:43:32.839590 | orchestrator | Sunday 15 February 2026 03:43:32 +0000 (0:00:00.410) 0:00:39.473 *******
2026-02-15 03:43:32.839594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:43:32.839599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 03:43:32.839603 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 03:43:32.839608 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:32.839612 | orchestrator |
2026-02-15 03:43:32.839617 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 03:43:32.839621 | orchestrator | Sunday 15 February 2026 03:43:32 +0000 (0:00:00.399) 0:00:39.873 *******
2026-02-15 03:43:32.839626 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:32.839630 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:32.839635 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:32.839639 | orchestrator |
2026-02-15 03:43:32.839644 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 03:43:32.839654 | orchestrator | Sunday 15 February 2026 03:43:32 +0000 (0:00:00.369) 0:00:40.242 *******
2026-02-15 03:43:53.829766 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-15 03:43:53.829900 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-15 03:43:53.829917 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-15 03:43:53.829928 | orchestrator |
2026-02-15 03:43:53.829947 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-15 03:43:53.829966 | orchestrator | Sunday 15 February 2026 03:43:33 +0000 (0:00:01.095) 0:00:41.337 *******
2026-02-15 03:43:53.829984 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 03:43:53.830000 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 03:43:53.830012 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 03:43:53.830109 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:43:53.830139 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-15 03:43:53.830159 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-15 03:43:53.830177 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 03:43:53.830192 | orchestrator |
2026-02-15 03:43:53.830207 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-15 03:43:53.830223 | orchestrator | Sunday 15 February 2026 03:43:34 +0000 (0:00:00.907) 0:00:42.245 *******
2026-02-15 03:43:53.830242 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 03:43:53.830260 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 03:43:53.830280 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 03:43:53.830329 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:43:53.830346 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-15 03:43:53.830358 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-15 03:43:53.830374 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 03:43:53.830392 | orchestrator |
2026-02-15 03:43:53.830409 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 03:43:53.830427 | orchestrator | Sunday 15 February 2026 03:43:36 +0000 (0:00:02.089) 0:00:44.334 *******
2026-02-15 03:43:53.830446 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:43:53.830465 | orchestrator |
2026-02-15 03:43:53.830477 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 03:43:53.830489 | orchestrator | Sunday 15 February 2026 03:43:38 +0000 (0:00:01.380) 0:00:45.714 *******
2026-02-15 03:43:53.830502 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:43:53.830589 | orchestrator |
2026-02-15 03:43:53.830613 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 03:43:53.830631 | orchestrator | Sunday 15 February 2026 03:43:39 +0000 (0:00:01.323) 0:00:47.038 *******
2026-02-15 03:43:53.830650 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:53.830669 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:53.830687 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:53.830704 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:53.830722 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:53.830737 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:53.830755 | orchestrator |
2026-02-15 03:43:53.830772 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 03:43:53.830785 | orchestrator | Sunday 15 February 2026 03:43:40 +0000 (0:00:01.323) 0:00:48.361 *******
2026-02-15 03:43:53.830795 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:53.830804 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:53.830814 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:53.830843 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:53.830860 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:53.830877 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:53.830894 | orchestrator |
2026-02-15 03:43:53.830911 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 03:43:53.830926 | orchestrator | Sunday 15 February 2026 03:43:41 +0000 (0:00:00.749) 0:00:49.110 *******
2026-02-15 03:43:53.830939 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:53.830957 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:53.830973 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:53.830983 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:53.830993 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:53.831003 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:53.831012 | orchestrator |
2026-02-15 03:43:53.831022 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 03:43:53.831032 | orchestrator | Sunday 15 February 2026 03:43:42 +0000 (0:00:00.920) 0:00:50.031 *******
2026-02-15 03:43:53.831041 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:53.831051 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:53.831060 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:53.831070 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:53.831079 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:53.831089 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:53.831098 | orchestrator |
2026-02-15 03:43:53.831108 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 03:43:53.831128 | orchestrator | Sunday 15 February 2026 03:43:43 +0000 (0:00:00.743) 0:00:50.775 *******
2026-02-15 03:43:53.831144 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:53.831159 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:53.831201 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:53.831219 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:53.831237 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:53.831254 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:53.831267 | orchestrator |
2026-02-15 03:43:53.831279 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 03:43:53.831294 | orchestrator | Sunday 15 February 2026 03:43:44 +0000 (0:00:01.323) 0:00:52.099 *******
2026-02-15 03:43:53.831312 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:53.831327 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:53.831345 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:53.831361 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:53.831378 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:53.831395 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:53.831405 | orchestrator |
2026-02-15 03:43:53.831414 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 03:43:53.831424 | orchestrator | Sunday 15 February 2026 03:43:45 +0000 (0:00:00.678) 0:00:52.777 *******
2026-02-15 03:43:53.831433 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:53.831443 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:53.831452 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:53.831462 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:53.831471 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:53.831480 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:53.831490 | orchestrator |
2026-02-15 03:43:53.831499 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 03:43:53.831509 | orchestrator | Sunday 15 February 2026 03:43:46 +0000 (0:00:00.890) 0:00:53.668 *******
2026-02-15 03:43:53.831540 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:53.831559 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:53.831569 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:53.831578 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:53.831588 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:53.831597 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:53.831607 | orchestrator |
2026-02-15 03:43:53.831616 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 03:43:53.831626 | orchestrator | Sunday 15 February 2026 03:43:47 +0000 (0:00:01.065) 0:00:54.734 *******
2026-02-15 03:43:53.831635 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:53.831649 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:53.831664 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:53.831682 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:53.831698 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:53.831714 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:53.831724 | orchestrator |
2026-02-15 03:43:53.831734 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 03:43:53.831743 | orchestrator | Sunday 15 February 2026 03:43:48 +0000 (0:00:01.457) 0:00:56.191 *******
2026-02-15 03:43:53.831753 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:53.831763 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:53.831772 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:53.831782 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:53.831791 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:53.831801 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:53.831815 | orchestrator |
2026-02-15 03:43:53.831831 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 03:43:53.831848 | orchestrator | Sunday 15 February 2026 03:43:49 +0000 (0:00:00.668) 0:00:56.860 *******
2026-02-15 03:43:53.831865 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:53.831877 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:53.831887 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:53.831906 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:43:53.831917 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:43:53.831933 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:43:53.831950 | orchestrator |
2026-02-15 03:43:53.831966 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 03:43:53.831982 | orchestrator | Sunday 15 February 2026 03:43:50 +0000 (0:00:00.919) 0:00:57.779 *******
2026-02-15 03:43:53.831998 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:53.832015 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:53.832032 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:53.832049 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:53.832065 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:53.832082 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:53.832098 | orchestrator |
2026-02-15 03:43:53.832114 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 03:43:53.832129 | orchestrator | Sunday 15 February 2026 03:43:50 +0000 (0:00:00.632) 0:00:58.412 *******
2026-02-15 03:43:53.832146 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:53.832163 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:53.832180 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:53.832197 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:53.832207 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:53.832216 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:53.832226 | orchestrator |
2026-02-15 03:43:53.832235 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 03:43:53.832245 | orchestrator | Sunday 15 February 2026 03:43:51 +0000 (0:00:00.939) 0:00:59.352 *******
2026-02-15 03:43:53.832255 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:43:53.832264 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:43:53.832273 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:43:53.832324 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:53.832334 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:53.832344 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:53.832353 | orchestrator |
2026-02-15 03:43:53.832363 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 03:43:53.832372 | orchestrator | Sunday 15 February 2026 03:43:52 +0000 (0:00:00.637) 0:00:59.990 *******
2026-02-15 03:43:53.832382 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:53.832391 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:43:53.832401 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:43:53.832410 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:43:53.832419 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:43:53.832429 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:43:53.832438 | orchestrator |
2026-02-15 03:43:53.832448 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 03:43:53.832458 | orchestrator | Sunday 15 February 2026 03:43:53 +0000 (0:00:00.924) 0:01:00.914 *******
2026-02-15 03:43:53.832467 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:43:53.832485 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:06.417431 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:06.417630 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:06.417655 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:06.417667 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:06.417677 | orchestrator |
2026-02-15 03:45:06.417688 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 03:45:06.417700 | orchestrator | Sunday 15 February 2026 03:43:54 +0000 (0:00:00.664) 0:01:01.579 *******
2026-02-15 03:45:06.417710 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:06.417720 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:06.417730 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:06.417739 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:45:06.417750 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:45:06.417760 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:45:06.417770 | orchestrator |
2026-02-15 03:45:06.417780 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 03:45:06.417814 | orchestrator | Sunday 15 February 2026 03:43:55 +0000 (0:00:00.964) 0:01:02.544 *******
2026-02-15 03:45:06.417824 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:45:06.417834 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:45:06.417844 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:45:06.417853 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:45:06.417863 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:45:06.417873 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:45:06.417882 | orchestrator |
2026-02-15 03:45:06.417892 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 03:45:06.417902 | orchestrator | Sunday 15 February 2026 03:43:55 +0000 (0:00:00.683) 0:01:03.228 *******
2026-02-15 03:45:06.417912 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:45:06.417921 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:45:06.417931 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:45:06.417941 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:45:06.417951 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:45:06.417961 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:45:06.417970 | orchestrator |
2026-02-15 03:45:06.417982 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-15 03:45:06.417993 | orchestrator | Sunday 15 February 2026 03:43:57 +0000 (0:00:01.616) 0:01:04.844 *******
2026-02-15 03:45:06.418005 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:45:06.418070 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:45:06.418084 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:45:06.418095 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:45:06.418107 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:45:06.418119 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:45:06.418130 | orchestrator |
2026-02-15 03:45:06.418141 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-15 03:45:06.418151 | orchestrator | Sunday 15 February 2026 03:43:59 +0000 (0:00:01.902) 0:01:06.747 *******
2026-02-15 03:45:06.418160 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:45:06.418170 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:45:06.418179 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:45:06.418189 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:45:06.418199 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:45:06.418208 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:45:06.418217 | orchestrator |
2026-02-15 03:45:06.418227 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-15 03:45:06.418236 | orchestrator | Sunday 15 February 2026 03:44:01 +0000 (0:00:02.191) 0:01:08.938 *******
2026-02-15 03:45:06.418247 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:45:06.418258 | orchestrator |
2026-02-15 03:45:06.418268 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-15 03:45:06.418278 | orchestrator | Sunday 15 February 2026 03:44:03 +0000 (0:00:01.669) 0:01:10.608 *******
2026-02-15 03:45:06.418287 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:06.418297 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:06.418306 | orchestrator |
skipping: [testbed-node-5] 2026-02-15 03:45:06.418316 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:06.418325 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:06.418334 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:06.418344 | orchestrator | 2026-02-15 03:45:06.418354 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-15 03:45:06.418363 | orchestrator | Sunday 15 February 2026 03:44:03 +0000 (0:00:00.710) 0:01:11.318 ******* 2026-02-15 03:45:06.418373 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:06.418382 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:06.418405 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:06.418415 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:06.418432 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:06.418442 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:06.418451 | orchestrator | 2026-02-15 03:45:06.418461 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-15 03:45:06.418471 | orchestrator | Sunday 15 February 2026 03:44:04 +0000 (0:00:00.901) 0:01:12.219 ******* 2026-02-15 03:45:06.418480 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-15 03:45:06.418490 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-15 03:45:06.418499 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-15 03:45:06.418509 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-15 03:45:06.418518 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-15 03:45:06.418527 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-15 03:45:06.418561 | orchestrator | ok: 
[testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-15 03:45:06.418572 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-15 03:45:06.418582 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-15 03:45:06.418610 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-15 03:45:06.418627 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-15 03:45:06.418643 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-15 03:45:06.418657 | orchestrator | 2026-02-15 03:45:06.418673 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-15 03:45:06.418689 | orchestrator | Sunday 15 February 2026 03:44:06 +0000 (0:00:01.389) 0:01:13.609 ******* 2026-02-15 03:45:06.418705 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:45:06.418719 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:45:06.418733 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:45:06.418749 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:45:06.418765 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:45:06.418781 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:45:06.418798 | orchestrator | 2026-02-15 03:45:06.418815 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-15 03:45:06.418831 | orchestrator | Sunday 15 February 2026 03:44:07 +0000 (0:00:01.183) 0:01:14.792 ******* 2026-02-15 03:45:06.418847 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:06.418861 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:06.418871 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:06.418880 | orchestrator | skipping: [testbed-node-0] 2026-02-15 
03:45:06.418890 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:06.418899 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:06.418909 | orchestrator | 2026-02-15 03:45:06.418919 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-15 03:45:06.418928 | orchestrator | Sunday 15 February 2026 03:44:08 +0000 (0:00:00.686) 0:01:15.478 ******* 2026-02-15 03:45:06.418938 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:06.418948 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:06.418957 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:06.418967 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:06.418976 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:06.418986 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:06.418996 | orchestrator | 2026-02-15 03:45:06.419006 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-15 03:45:06.419015 | orchestrator | Sunday 15 February 2026 03:44:08 +0000 (0:00:00.889) 0:01:16.368 ******* 2026-02-15 03:45:06.419025 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:06.419035 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:06.419053 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:06.419063 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:06.419072 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:06.419082 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:06.419091 | orchestrator | 2026-02-15 03:45:06.419101 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-15 03:45:06.419110 | orchestrator | Sunday 15 February 2026 03:44:09 +0000 (0:00:00.670) 0:01:17.038 ******* 2026-02-15 03:45:06.419120 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:45:06.419130 | orchestrator | 2026-02-15 03:45:06.419140 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-15 03:45:06.419149 | orchestrator | Sunday 15 February 2026 03:44:11 +0000 (0:00:01.448) 0:01:18.486 ******* 2026-02-15 03:45:06.419159 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:45:06.419169 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:45:06.419178 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:45:06.419188 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:45:06.419197 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:45:06.419207 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:45:06.419216 | orchestrator | 2026-02-15 03:45:06.419226 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-15 03:45:06.419236 | orchestrator | Sunday 15 February 2026 03:45:05 +0000 (0:00:54.559) 0:02:13.046 ******* 2026-02-15 03:45:06.419246 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-15 03:45:06.419255 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-15 03:45:06.419265 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-15 03:45:06.419274 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:06.419291 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-15 03:45:06.419301 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-15 03:45:06.419310 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-15 03:45:06.419320 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:06.419330 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  
2026-02-15 03:45:06.419340 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-15 03:45:06.419349 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-15 03:45:06.419359 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:06.419369 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-15 03:45:06.419379 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-15 03:45:06.419388 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-15 03:45:06.419398 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:06.419408 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-15 03:45:06.419417 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-15 03:45:06.419427 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-15 03:45:06.419445 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:30.656192 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-15 03:45:30.656332 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-15 03:45:30.656351 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-15 03:45:30.656364 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.656377 | orchestrator | 2026-02-15 03:45:30.656390 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-15 03:45:30.656425 | orchestrator | Sunday 15 February 2026 03:45:06 +0000 (0:00:00.779) 0:02:13.825 ******* 2026-02-15 03:45:30.656437 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.656448 | orchestrator | skipping: [testbed-node-4] 2026-02-15 
03:45:30.656460 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:30.656471 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.656482 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:30.656493 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.656504 | orchestrator | 2026-02-15 03:45:30.656515 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-15 03:45:30.656526 | orchestrator | Sunday 15 February 2026 03:45:07 +0000 (0:00:00.899) 0:02:14.725 ******* 2026-02-15 03:45:30.656595 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.656610 | orchestrator | 2026-02-15 03:45:30.656622 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-15 03:45:30.656634 | orchestrator | Sunday 15 February 2026 03:45:07 +0000 (0:00:00.172) 0:02:14.897 ******* 2026-02-15 03:45:30.656645 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.656656 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:30.656667 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:30.656678 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.656689 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:30.656700 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.656711 | orchestrator | 2026-02-15 03:45:30.656722 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-15 03:45:30.656733 | orchestrator | Sunday 15 February 2026 03:45:08 +0000 (0:00:00.700) 0:02:15.598 ******* 2026-02-15 03:45:30.656745 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.656755 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:30.656766 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:30.656777 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.656788 | orchestrator | skipping: [testbed-node-1] 2026-02-15 
03:45:30.656799 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.656810 | orchestrator | 2026-02-15 03:45:30.656821 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-15 03:45:30.656833 | orchestrator | Sunday 15 February 2026 03:45:09 +0000 (0:00:00.911) 0:02:16.509 ******* 2026-02-15 03:45:30.656845 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.656856 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:30.656867 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:30.656878 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.656889 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:30.656900 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.656911 | orchestrator | 2026-02-15 03:45:30.656922 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-15 03:45:30.656933 | orchestrator | Sunday 15 February 2026 03:45:09 +0000 (0:00:00.658) 0:02:17.167 ******* 2026-02-15 03:45:30.656944 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:45:30.656956 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:45:30.656967 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:45:30.656979 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:45:30.656990 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:45:30.657000 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:45:30.657011 | orchestrator | 2026-02-15 03:45:30.657023 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-15 03:45:30.657034 | orchestrator | Sunday 15 February 2026 03:45:13 +0000 (0:00:03.500) 0:02:20.668 ******* 2026-02-15 03:45:30.657045 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:45:30.657056 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:45:30.657067 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:45:30.657077 | orchestrator | ok: [testbed-node-0] 
2026-02-15 03:45:30.657088 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:45:30.657099 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:45:30.657118 | orchestrator | 2026-02-15 03:45:30.657130 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-15 03:45:30.657141 | orchestrator | Sunday 15 February 2026 03:45:13 +0000 (0:00:00.635) 0:02:21.304 ******* 2026-02-15 03:45:30.657168 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:45:30.657181 | orchestrator | 2026-02-15 03:45:30.657193 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-15 03:45:30.657204 | orchestrator | Sunday 15 February 2026 03:45:15 +0000 (0:00:01.396) 0:02:22.700 ******* 2026-02-15 03:45:30.657215 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.657226 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:30.657237 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:30.657248 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.657259 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:30.657270 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.657281 | orchestrator | 2026-02-15 03:45:30.657292 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-15 03:45:30.657303 | orchestrator | Sunday 15 February 2026 03:45:16 +0000 (0:00:00.944) 0:02:23.644 ******* 2026-02-15 03:45:30.657314 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.657325 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:30.657335 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:30.657346 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.657357 | orchestrator | skipping: [testbed-node-1] 2026-02-15 
03:45:30.657368 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.657379 | orchestrator | 2026-02-15 03:45:30.657390 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-15 03:45:30.657402 | orchestrator | Sunday 15 February 2026 03:45:16 +0000 (0:00:00.710) 0:02:24.354 ******* 2026-02-15 03:45:30.657413 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.657443 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:30.657455 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:30.657466 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.657477 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:30.657488 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.657499 | orchestrator | 2026-02-15 03:45:30.657510 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-15 03:45:30.657521 | orchestrator | Sunday 15 February 2026 03:45:17 +0000 (0:00:01.031) 0:02:25.386 ******* 2026-02-15 03:45:30.657532 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.657560 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:30.657571 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:30.657582 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.657593 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:30.657603 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.657614 | orchestrator | 2026-02-15 03:45:30.657625 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-15 03:45:30.657636 | orchestrator | Sunday 15 February 2026 03:45:18 +0000 (0:00:00.673) 0:02:26.060 ******* 2026-02-15 03:45:30.657647 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.657669 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:30.657680 | orchestrator | skipping: [testbed-node-5] 2026-02-15 
03:45:30.657691 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.657702 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:30.657712 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.657723 | orchestrator | 2026-02-15 03:45:30.657734 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-15 03:45:30.657745 | orchestrator | Sunday 15 February 2026 03:45:19 +0000 (0:00:00.967) 0:02:27.027 ******* 2026-02-15 03:45:30.657756 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.657767 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:30.657785 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:30.657796 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.657807 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:30.657818 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.657829 | orchestrator | 2026-02-15 03:45:30.657840 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-15 03:45:30.657851 | orchestrator | Sunday 15 February 2026 03:45:20 +0000 (0:00:00.682) 0:02:27.710 ******* 2026-02-15 03:45:30.657862 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:45:30.657873 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:30.657883 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:30.657894 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.657905 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:30.657916 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.657926 | orchestrator | 2026-02-15 03:45:30.657937 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-15 03:45:30.657949 | orchestrator | Sunday 15 February 2026 03:45:21 +0000 (0:00:00.929) 0:02:28.639 ******* 2026-02-15 03:45:30.657960 | orchestrator | skipping: [testbed-node-3] 2026-02-15 
03:45:30.657970 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:45:30.657981 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:45:30.657992 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:45:30.658003 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:45:30.658013 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:45:30.658094 | orchestrator | 2026-02-15 03:45:30.658105 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-15 03:45:30.658117 | orchestrator | Sunday 15 February 2026 03:45:21 +0000 (0:00:00.669) 0:02:29.308 ******* 2026-02-15 03:45:30.658128 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:45:30.658138 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:45:30.658150 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:45:30.658161 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:45:30.658171 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:45:30.658268 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:45:30.658283 | orchestrator | 2026-02-15 03:45:30.658295 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-15 03:45:30.658306 | orchestrator | Sunday 15 February 2026 03:45:23 +0000 (0:00:01.389) 0:02:30.697 ******* 2026-02-15 03:45:30.658318 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:45:30.658331 | orchestrator | 2026-02-15 03:45:30.658342 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-15 03:45:30.658360 | orchestrator | Sunday 15 February 2026 03:45:24 +0000 (0:00:01.385) 0:02:32.083 ******* 2026-02-15 03:45:30.658372 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-02-15 03:45:30.658383 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 
2026-02-15 03:45:30.658394 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-02-15 03:45:30.658405 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-02-15 03:45:30.658416 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-02-15 03:45:30.658426 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-15 03:45:30.658437 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-15 03:45:30.658448 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-02-15 03:45:30.658458 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-15 03:45:30.658469 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2026-02-15 03:45:30.658479 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-02-15 03:45:30.658490 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-15 03:45:30.658501 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-15 03:45:30.658512 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-02-15 03:45:30.658532 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-15 03:45:30.658604 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-02-15 03:45:30.658617 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-02-15 03:45:30.658641 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-15 03:45:36.257056 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-15 03:45:36.257201 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-02-15 03:45:36.257228 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-15 03:45:36.257246 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-02-15 03:45:36.257264 | orchestrator | changed: [testbed-node-3] => 
(item=/var/lib/ceph/tmp) 2026-02-15 03:45:36.257282 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-02-15 03:45:36.257300 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-15 03:45:36.257317 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-02-15 03:45:36.257334 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-15 03:45:36.257351 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-15 03:45:36.257370 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-02-15 03:45:36.257387 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-02-15 03:45:36.257403 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-15 03:45:36.257421 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-02-15 03:45:36.257438 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-15 03:45:36.257455 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-15 03:45:36.257474 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-02-15 03:45:36.257493 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-02-15 03:45:36.257510 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-15 03:45:36.257528 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-15 03:45:36.257675 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-02-15 03:45:36.257702 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-15 03:45:36.257726 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-02-15 03:45:36.257748 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-02-15 03:45:36.257768 | orchestrator | changed: [testbed-node-4] => 
(item=/var/lib/ceph/radosgw) 2026-02-15 03:45:36.257788 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-15 03:45:36.257808 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-02-15 03:45:36.257833 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-15 03:45:36.257857 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-02-15 03:45:36.257880 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-02-15 03:45:36.257907 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-15 03:45:36.257929 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-15 03:45:36.257949 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-02-15 03:45:36.257969 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-15 03:45:36.257990 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-02-15 03:45:36.258010 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-15 03:45:36.258113 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-15 03:45:36.258136 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-15 03:45:36.258197 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-15 03:45:36.258221 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-15 03:45:36.258242 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-15 03:45:36.258262 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-15 03:45:36.258283 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-15 03:45:36.258323 | orchestrator | changed: 
[testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 03:45:36.258346 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-15 03:45:36.258366 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-15 03:45:36.258388 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 03:45:36.258408 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 03:45:36.258429 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 03:45:36.258448 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 03:45:36.258467 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 03:45:36.258487 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 03:45:36.258508 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 03:45:36.258528 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 03:45:36.258577 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 03:45:36.258599 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-02-15 03:45:36.258619 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 03:45:36.258639 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 03:45:36.258689 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 03:45:36.258713 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 03:45:36.258731 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 03:45:36.258749 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-02-15 03:45:36.258767 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 03:45:36.258786 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 03:45:36.258806 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 03:45:36.258826 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 03:45:36.258846 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-02-15 03:45:36.258866 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 03:45:36.258886 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 03:45:36.258906 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-02-15 03:45:36.258927 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-02-15 03:45:36.258946 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-02-15 03:45:36.258965 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-02-15 03:45:36.258983 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-02-15 03:45:36.259001 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-02-15 03:45:36.259019 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-02-15 03:45:36.259038 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-02-15 03:45:36.259055 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-02-15 03:45:36.259073 | orchestrator |
2026-02-15 03:45:36.259111 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-15 03:45:36.259130 | orchestrator | Sunday 15 February 2026 03:45:30 +0000 (0:00:05.965) 0:02:38.048 *******
2026-02-15 03:45:36.259149 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:36.259168 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:36.259187 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:36.259206 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:45:36.259226 | orchestrator |
2026-02-15 03:45:36.259246 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-15 03:45:36.259264 | orchestrator | Sunday 15 February 2026 03:45:31 +0000 (0:00:01.156) 0:02:39.205 *******
2026-02-15 03:45:36.259283 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-15 03:45:36.259303 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-15 03:45:36.259322 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-15 03:45:36.259342 | orchestrator |
2026-02-15 03:45:36.259361 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-15 03:45:36.259379 | orchestrator | Sunday 15 February 2026 03:45:32 +0000 (0:00:00.736) 0:02:39.941 *******
2026-02-15 03:45:36.259397 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-15 03:45:36.259414 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-15 03:45:36.259433 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-15 03:45:36.259451 | orchestrator |
2026-02-15 03:45:36.259479 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-15 03:45:36.259498 | orchestrator | Sunday 15 February 2026 03:45:33 +0000 (0:00:01.153) 0:02:41.095 *******
2026-02-15 03:45:36.259515 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:45:36.259533 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:45:36.259584 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:45:36.259606 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:36.259626 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:36.259645 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:36.259664 | orchestrator |
2026-02-15 03:45:36.259684 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 03:45:36.259703 | orchestrator | Sunday 15 February 2026 03:45:34 +0000 (0:00:00.957) 0:02:42.052 *******
2026-02-15 03:45:36.259722 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:45:36.259741 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:45:36.259757 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:45:36.259773 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:36.259790 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:36.259807 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:36.259824 | orchestrator |
2026-02-15 03:45:36.259841 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 03:45:36.259859 | orchestrator | Sunday 15 February 2026 03:45:35 +0000 (0:00:00.659) 0:02:42.712 *******
2026-02-15 03:45:36.259877 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:36.259895 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:36.259914 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:36.259932 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:36.259951 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:36.259969 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:36.259987 | orchestrator |
2026-02-15 03:45:36.260025 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 03:45:50.582264 | orchestrator | Sunday 15 February 2026 03:45:36 +0000 (0:00:00.950) 0:02:43.662 *******
2026-02-15 03:45:50.582346 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:50.582354 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:50.582358 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:50.582363 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.582367 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.582372 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.582377 | orchestrator |
2026-02-15 03:45:50.582382 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 03:45:50.582387 | orchestrator | Sunday 15 February 2026 03:45:36 +0000 (0:00:00.658) 0:02:44.321 *******
2026-02-15 03:45:50.582391 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:50.582396 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:50.582400 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:50.582404 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.582409 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.582413 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.582417 | orchestrator |
2026-02-15 03:45:50.582422 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-15 03:45:50.582427 | orchestrator | Sunday 15 February 2026 03:45:37 +0000 (0:00:00.884) 0:02:45.206 *******
2026-02-15 03:45:50.582431 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:50.582435 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:50.582440 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:50.582444 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.582448 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.582452 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.582467 | orchestrator |
2026-02-15 03:45:50.582472 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-15 03:45:50.582481 | orchestrator | Sunday 15 February 2026 03:45:38 +0000 (0:00:00.717) 0:02:45.924 *******
2026-02-15 03:45:50.582486 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:50.582490 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:50.582494 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:50.582499 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.582503 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.582507 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.582511 | orchestrator |
2026-02-15 03:45:50.582516 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-15 03:45:50.582520 | orchestrator | Sunday 15 February 2026 03:45:39 +0000 (0:00:01.025) 0:02:46.949 *******
2026-02-15 03:45:50.582524 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:50.582529 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:50.582533 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:50.582537 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.582574 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.582582 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.582589 | orchestrator |
2026-02-15 03:45:50.582597 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-15 03:45:50.582604 | orchestrator | Sunday 15 February 2026 03:45:40 +0000 (0:00:00.698) 0:02:47.647 *******
2026-02-15 03:45:50.582612 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.582617 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.582621 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.582626 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:45:50.582631 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:45:50.582635 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:45:50.582640 | orchestrator |
2026-02-15 03:45:50.582644 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-15 03:45:50.582649 | orchestrator | Sunday 15 February 2026 03:45:43 +0000 (0:00:02.827) 0:02:50.475 *******
2026-02-15 03:45:50.582669 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:45:50.582674 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:45:50.582678 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:45:50.582682 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.582687 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.582691 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.582695 | orchestrator |
2026-02-15 03:45:50.582699 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-15 03:45:50.582704 | orchestrator | Sunday 15 February 2026 03:45:43 +0000 (0:00:00.767) 0:02:51.242 *******
2026-02-15 03:45:50.582719 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:45:50.582723 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:45:50.582728 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:45:50.582732 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.582736 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.582740 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.582744 | orchestrator |
2026-02-15 03:45:50.582749 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-15 03:45:50.582753 | orchestrator | Sunday 15 February 2026 03:45:44 +0000 (0:00:00.996) 0:02:52.239 *******
2026-02-15 03:45:50.582757 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:50.582761 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:50.582765 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:50.582770 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.582774 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.582778 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.582782 | orchestrator |
2026-02-15 03:45:50.582787 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-15 03:45:50.582791 | orchestrator | Sunday 15 February 2026 03:45:45 +0000 (0:00:00.761) 0:02:53.000 *******
2026-02-15 03:45:50.582795 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-15 03:45:50.582811 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-15 03:45:50.582818 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-15 03:45:50.582832 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.582854 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.582862 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.582869 | orchestrator |
2026-02-15 03:45:50.582876 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-15 03:45:50.582883 | orchestrator | Sunday 15 February 2026 03:45:46 +0000 (0:00:00.950) 0:02:53.950 *******
2026-02-15 03:45:50.582892 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-15 03:45:50.582902 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-15 03:45:50.582910 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:50.582918 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-02-15 03:45:50.582925 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-02-15 03:45:50.582940 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:50.582947 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-15 03:45:50.582952 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-15 03:45:50.582958 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:50.582963 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.582968 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.582973 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.582978 | orchestrator |
2026-02-15 03:45:50.582983 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-15 03:45:50.582988 | orchestrator | Sunday 15 February 2026 03:45:47 +0000 (0:00:00.748) 0:02:54.699 *******
2026-02-15 03:45:50.582993 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:50.582998 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:50.583002 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:50.583007 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.583012 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.583017 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.583022 | orchestrator |
2026-02-15 03:45:50.583027 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-15 03:45:50.583036 | orchestrator | Sunday 15 February 2026 03:45:48 +0000 (0:00:00.912) 0:02:55.612 *******
2026-02-15 03:45:50.583041 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:50.583046 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:50.583051 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:50.583056 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.583061 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.583066 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.583071 | orchestrator |
2026-02-15 03:45:50.583076 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 03:45:50.583081 | orchestrator | Sunday 15 February 2026 03:45:48 +0000 (0:00:00.669) 0:02:56.281 *******
2026-02-15 03:45:50.583086 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:50.583091 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:50.583096 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:50.583101 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.583106 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.583111 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.583116 | orchestrator |
2026-02-15 03:45:50.583121 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 03:45:50.583126 | orchestrator | Sunday 15 February 2026 03:45:49 +0000 (0:00:01.012) 0:02:57.293 *******
2026-02-15 03:45:50.583131 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:45:50.583135 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:45:50.583139 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:45:50.583144 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:45:50.583148 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:45:50.583152 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:45:50.583156 | orchestrator |
2026-02-15 03:45:50.583161 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 03:45:50.583173 | orchestrator | Sunday 15 February 2026 03:45:50 +0000 (0:00:00.694) 0:02:57.988 *******
2026-02-15 03:46:09.635740 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.635834 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:46:09.635844 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:46:09.635852 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:46:09.635859 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:46:09.635866 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:46:09.635874 | orchestrator |
2026-02-15 03:46:09.635882 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 03:46:09.635890 | orchestrator | Sunday 15 February 2026 03:45:51 +0000 (0:00:01.022) 0:02:59.011 *******
2026-02-15 03:46:09.635897 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:46:09.635905 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:46:09.635912 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:46:09.635919 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:46:09.635926 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:46:09.635932 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:46:09.635939 | orchestrator |
2026-02-15 03:46:09.635947 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 03:46:09.635953 | orchestrator | Sunday 15 February 2026 03:45:52 +0000 (0:00:00.962) 0:02:59.974 *******
2026-02-15 03:46:09.635961 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:46:09.635968 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 03:46:09.635975 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 03:46:09.635982 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.635989 | orchestrator |
2026-02-15 03:46:09.635996 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 03:46:09.636003 | orchestrator | Sunday 15 February 2026 03:45:52 +0000 (0:00:00.433) 0:03:00.408 *******
2026-02-15 03:46:09.636009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:46:09.636016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 03:46:09.636023 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 03:46:09.636030 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.636037 | orchestrator |
2026-02-15 03:46:09.636044 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 03:46:09.636051 | orchestrator | Sunday 15 February 2026 03:45:53 +0000 (0:00:00.470) 0:03:00.879 *******
2026-02-15 03:46:09.636058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:46:09.636064 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 03:46:09.636071 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 03:46:09.636078 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.636085 | orchestrator |
2026-02-15 03:46:09.636108 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 03:46:09.636115 | orchestrator | Sunday 15 February 2026 03:45:53 +0000 (0:00:00.415) 0:03:01.295 *******
2026-02-15 03:46:09.636122 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:46:09.636129 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:46:09.636136 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:46:09.636143 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:46:09.636150 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:46:09.636157 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:46:09.636164 | orchestrator |
2026-02-15 03:46:09.636171 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 03:46:09.636178 | orchestrator | Sunday 15 February 2026 03:45:54 +0000 (0:00:00.694) 0:03:01.990 *******
2026-02-15 03:46:09.636185 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-15 03:46:09.636192 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-15 03:46:09.636199 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-15 03:46:09.636206 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-15 03:46:09.636232 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:46:09.636240 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-15 03:46:09.636247 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:46:09.636254 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-15 03:46:09.636261 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:46:09.636268 | orchestrator |
2026-02-15 03:46:09.636275 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-15 03:46:09.636294 | orchestrator | Sunday 15 February 2026 03:45:56 +0000 (0:00:01.894) 0:03:03.884 *******
2026-02-15 03:46:09.636303 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:46:09.636311 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:46:09.636318 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:46:09.636326 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:46:09.636334 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:46:09.636343 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:46:09.636350 | orchestrator |
2026-02-15 03:46:09.636359 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-15 03:46:09.636367 | orchestrator | Sunday 15 February 2026 03:45:59 +0000 (0:00:02.814) 0:03:06.699 *******
2026-02-15 03:46:09.636375 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:46:09.636383 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:46:09.636391 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:46:09.636399 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:46:09.636407 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:46:09.636415 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:46:09.636423 | orchestrator |
2026-02-15 03:46:09.636431 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-15 03:46:09.636439 | orchestrator | Sunday 15 February 2026 03:46:00 +0000 (0:00:01.069) 0:03:07.768 *******
2026-02-15 03:46:09.636448 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.636456 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:46:09.636463 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:46:09.636472 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:46:09.636480 | orchestrator |
2026-02-15 03:46:09.636488 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-15 03:46:09.636497 | orchestrator | Sunday 15 February 2026 03:46:01 +0000 (0:00:01.371) 0:03:09.139 *******
2026-02-15 03:46:09.636504 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:46:09.636525 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:46:09.636533 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:46:09.636541 | orchestrator |
2026-02-15 03:46:09.636565 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-15 03:46:09.636573 | orchestrator | Sunday 15 February 2026 03:46:02 +0000 (0:00:00.359) 0:03:09.499 *******
2026-02-15 03:46:09.636581 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:46:09.636589 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:46:09.636598 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:46:09.636605 | orchestrator |
2026-02-15 03:46:09.636613 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-15 03:46:09.636621 | orchestrator | Sunday 15 February 2026 03:46:03 +0000 (0:00:01.526) 0:03:11.025 *******
2026-02-15 03:46:09.636630 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 03:46:09.636637 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-15 03:46:09.636645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-15 03:46:09.636652 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:46:09.636659 | orchestrator |
2026-02-15 03:46:09.636666 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-15 03:46:09.636673 | orchestrator | Sunday 15 February 2026 03:46:04 +0000 (0:00:00.708) 0:03:11.733 *******
2026-02-15 03:46:09.636680 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:46:09.636693 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:46:09.636700 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:46:09.636707 | orchestrator |
2026-02-15 03:46:09.636713 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-15 03:46:09.636720 | orchestrator | Sunday 15 February 2026 03:46:04 +0000 (0:00:00.387) 0:03:12.121 *******
2026-02-15 03:46:09.636727 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:46:09.636734 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:46:09.636741 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:46:09.636748 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:46:09.636755 | orchestrator |
2026-02-15 03:46:09.636762 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-02-15 03:46:09.636769 | orchestrator | Sunday 15 February 2026 03:46:05 +0000 (0:00:01.177) 0:03:13.299 *******
2026-02-15 03:46:09.636775 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:46:09.636782 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 03:46:09.636789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 03:46:09.636796 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.636803 | orchestrator |
2026-02-15 03:46:09.636810 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-15 03:46:09.636817 | orchestrator | Sunday 15 February 2026 03:46:06 +0000 (0:00:00.452) 0:03:13.751 *******
2026-02-15 03:46:09.636823 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.636830 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:46:09.636837 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:46:09.636844 | orchestrator |
2026-02-15 03:46:09.636851 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-15 03:46:09.636858 | orchestrator | Sunday 15 February 2026 03:46:06 +0000 (0:00:00.365) 0:03:14.116 *******
2026-02-15 03:46:09.636864 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.636871 | orchestrator |
2026-02-15 03:46:09.636878 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-15 03:46:09.636885 | orchestrator | Sunday 15 February 2026 03:46:06 +0000 (0:00:00.247) 0:03:14.364 *******
2026-02-15 03:46:09.636892 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.636899 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:46:09.636905 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:46:09.636912 | orchestrator |
2026-02-15 03:46:09.636919 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-15 03:46:09.636926 | orchestrator | Sunday 15 February 2026 03:46:07 +0000 (0:00:00.344) 0:03:14.708 *******
2026-02-15 03:46:09.636933 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.636940 | orchestrator |
2026-02-15 03:46:09.636946 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-15 03:46:09.636957 | orchestrator | Sunday 15 February 2026 03:46:08 +0000 (0:00:00.725) 0:03:15.434 *******
2026-02-15 03:46:09.636964 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.636971 | orchestrator |
2026-02-15 03:46:09.636978 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-15 03:46:09.636985 | orchestrator | Sunday 15 February 2026 03:46:08 +0000 (0:00:00.248) 0:03:15.683 *******
2026-02-15 03:46:09.636992 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.636998 | orchestrator |
2026-02-15 03:46:09.637005 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-15 03:46:09.637012 | orchestrator | Sunday 15 February 2026 03:46:08 +0000 (0:00:00.148) 0:03:15.831 *******
2026-02-15 03:46:09.637019 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.637026 | orchestrator |
2026-02-15 03:46:09.637033 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-15 03:46:09.637040 | orchestrator | Sunday 15 February 2026 03:46:08 +0000 (0:00:00.257) 0:03:16.089 *******
2026-02-15 03:46:09.637047 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.637058 | orchestrator |
2026-02-15 03:46:09.637065 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-15 03:46:09.637072 | orchestrator | Sunday 15 February 2026 03:46:08 +0000 (0:00:00.265) 0:03:16.354 *******
2026-02-15 03:46:09.637079 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:46:09.637086 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 03:46:09.637093 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 03:46:09.637100 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:09.637107 | orchestrator |
2026-02-15 03:46:09.637114 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-15 03:46:09.637121 | orchestrator | Sunday 15 February 2026 03:46:09 +0000 (0:00:00.490) 0:03:16.845 *******
2026-02-15 03:46:09.637131 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:30.078628 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:46:30.078744 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:46:30.078761 | orchestrator |
2026-02-15 03:46:30.078775 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-15 03:46:30.078788 | orchestrator | Sunday 15 February 2026 03:46:09 +0000 (0:00:00.344) 0:03:17.190 *******
2026-02-15 03:46:30.078800 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:30.078811 | orchestrator |
2026-02-15 03:46:30.078822 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-15 03:46:30.078833 | orchestrator | Sunday 15 February 2026 03:46:10 +0000 (0:00:00.260) 0:03:17.451 *******
2026-02-15 03:46:30.078844 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:30.078855 | orchestrator |
2026-02-15 03:46:30.078866 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-15 03:46:30.078878 | orchestrator | Sunday 15 February 2026 03:46:10 +0000 (0:00:00.249) 0:03:17.700 *******
2026-02-15 03:46:30.078889 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:46:30.078900 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:46:30.078911 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:46:30.078923 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:46:30.078934 | orchestrator |
2026-02-15 03:46:30.078946 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-02-15 03:46:30.078957 | orchestrator | Sunday 15 February 2026 03:46:11 +0000 (0:00:01.189) 0:03:18.890 *******
2026-02-15 03:46:30.078969 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:46:30.078980 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:46:30.078991 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:46:30.079002 | orchestrator |
2026-02-15 03:46:30.079013 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-02-15 03:46:30.079025 | orchestrator | Sunday 15 February 2026 03:46:11 +0000 (0:00:00.329) 0:03:19.219 *******
2026-02-15 03:46:30.079035 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:46:30.079047 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:46:30.079058 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:46:30.079069 | orchestrator |
2026-02-15 03:46:30.079080 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-02-15 03:46:30.079091 | orchestrator | Sunday 15 February 2026 03:46:13 +0000 (0:00:01.601) 0:03:20.821 *******
2026-02-15 03:46:30.079102 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:46:30.079114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 03:46:30.079125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 03:46:30.079138 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:30.079151 | orchestrator |
2026-02-15 03:46:30.079163 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-02-15 03:46:30.079177 | orchestrator | Sunday 15 February 2026 03:46:14 +0000 (0:00:00.730) 0:03:21.551 *******
2026-02-15 03:46:30.079190 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:46:30.079227 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:46:30.079241 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:46:30.079254 | orchestrator |
2026-02-15 03:46:30.079267 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-15 03:46:30.079281 | orchestrator | Sunday 15 February 2026 03:46:14 +0000 (0:00:00.396) 0:03:21.948 *******
2026-02-15 03:46:30.079293 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:46:30.079306 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:46:30.079319 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:46:30.079333 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:46:30.079346 | orchestrator |
2026-02-15 03:46:30.079359 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-02-15 03:46:30.079372 | orchestrator | Sunday 15 February 2026 03:46:15 +0000 (0:00:01.197) 0:03:23.145 *******
2026-02-15 03:46:30.079385 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:46:30.079398 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:46:30.079411 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:46:30.079423 | orchestrator |
2026-02-15 03:46:30.079436 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-02-15 03:46:30.079489 | orchestrator | Sunday 15 February 2026 03:46:16 +0000 (0:00:00.436) 0:03:23.581 *******
2026-02-15 03:46:30.079503 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:46:30.079516 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:46:30.079530 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:46:30.079542 | orchestrator |
2026-02-15 03:46:30.079571 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-02-15 03:46:30.079583 | orchestrator | Sunday 15 February 2026 03:46:17 +0000 (0:00:01.396) 0:03:24.978 *******
2026-02-15 03:46:30.079594 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 03:46:30.079605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 03:46:30.079616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 03:46:30.079627 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:46:30.079637 | orchestrator |
2026-02-15 03:46:30.079648 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-02-15 03:46:30.079660 |
orchestrator | Sunday 15 February 2026 03:46:18 +0000 (0:00:00.921) 0:03:25.900 ******* 2026-02-15 03:46:30.079671 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:46:30.079682 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:46:30.079692 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:46:30.079703 | orchestrator | 2026-02-15 03:46:30.079714 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-15 03:46:30.079725 | orchestrator | Sunday 15 February 2026 03:46:19 +0000 (0:00:00.613) 0:03:26.513 ******* 2026-02-15 03:46:30.079736 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:46:30.079746 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:46:30.079757 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:46:30.079768 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:30.079779 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:30.079790 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:30.079801 | orchestrator | 2026-02-15 03:46:30.079829 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-15 03:46:30.079841 | orchestrator | Sunday 15 February 2026 03:46:19 +0000 (0:00:00.762) 0:03:27.275 ******* 2026-02-15 03:46:30.079852 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:46:30.079863 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:46:30.079873 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:46:30.079884 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:46:30.079895 | orchestrator | 2026-02-15 03:46:30.079906 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-02-15 03:46:30.079917 | orchestrator | Sunday 15 February 2026 03:46:21 +0000 (0:00:01.315) 0:03:28.590 ******* 2026-02-15 03:46:30.079940 | orchestrator | ok: 
[testbed-node-0] 2026-02-15 03:46:30.079951 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:30.079962 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:30.079973 | orchestrator | 2026-02-15 03:46:30.079984 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-02-15 03:46:30.079995 | orchestrator | Sunday 15 February 2026 03:46:21 +0000 (0:00:00.395) 0:03:28.986 ******* 2026-02-15 03:46:30.080006 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:46:30.080016 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:46:30.080027 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:46:30.080038 | orchestrator | 2026-02-15 03:46:30.080049 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-02-15 03:46:30.080060 | orchestrator | Sunday 15 February 2026 03:46:22 +0000 (0:00:01.280) 0:03:30.267 ******* 2026-02-15 03:46:30.080071 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-15 03:46:30.080082 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-15 03:46:30.080093 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-15 03:46:30.080104 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:30.080114 | orchestrator | 2026-02-15 03:46:30.080125 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-02-15 03:46:30.080136 | orchestrator | Sunday 15 February 2026 03:46:23 +0000 (0:00:00.960) 0:03:31.227 ******* 2026-02-15 03:46:30.080147 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:30.080158 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:30.080168 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:30.080179 | orchestrator | 2026-02-15 03:46:30.080190 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2026-02-15 03:46:30.080201 | orchestrator | 2026-02-15 
03:46:30.080211 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-15 03:46:30.080222 | orchestrator | Sunday 15 February 2026 03:46:24 +0000 (0:00:00.942) 0:03:32.170 ******* 2026-02-15 03:46:30.080234 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:46:30.080246 | orchestrator | 2026-02-15 03:46:30.080257 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-15 03:46:30.080268 | orchestrator | Sunday 15 February 2026 03:46:25 +0000 (0:00:00.853) 0:03:33.024 ******* 2026-02-15 03:46:30.080279 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:46:30.080290 | orchestrator | 2026-02-15 03:46:30.080301 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-15 03:46:30.080312 | orchestrator | Sunday 15 February 2026 03:46:26 +0000 (0:00:00.590) 0:03:33.614 ******* 2026-02-15 03:46:30.080322 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:30.080333 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:30.080344 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:30.080355 | orchestrator | 2026-02-15 03:46:30.080365 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-15 03:46:30.080376 | orchestrator | Sunday 15 February 2026 03:46:26 +0000 (0:00:00.778) 0:03:34.393 ******* 2026-02-15 03:46:30.080387 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:30.080398 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:30.080409 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:30.080420 | orchestrator | 2026-02-15 03:46:30.080431 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 
2026-02-15 03:46:30.080447 | orchestrator | Sunday 15 February 2026 03:46:27 +0000 (0:00:00.614) 0:03:35.008 ******* 2026-02-15 03:46:30.080458 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:30.080469 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:30.080480 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:30.080491 | orchestrator | 2026-02-15 03:46:30.080502 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-15 03:46:30.080520 | orchestrator | Sunday 15 February 2026 03:46:27 +0000 (0:00:00.363) 0:03:35.371 ******* 2026-02-15 03:46:30.080531 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:30.080542 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:30.080593 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:30.080606 | orchestrator | 2026-02-15 03:46:30.080617 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-15 03:46:30.080628 | orchestrator | Sunday 15 February 2026 03:46:28 +0000 (0:00:00.367) 0:03:35.738 ******* 2026-02-15 03:46:30.080639 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:30.080650 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:30.080661 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:30.080672 | orchestrator | 2026-02-15 03:46:30.080683 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-15 03:46:30.080694 | orchestrator | Sunday 15 February 2026 03:46:29 +0000 (0:00:00.774) 0:03:36.513 ******* 2026-02-15 03:46:30.080704 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:30.080715 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:30.080726 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:30.080737 | orchestrator | 2026-02-15 03:46:30.080748 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-15 
03:46:30.080759 | orchestrator | Sunday 15 February 2026 03:46:29 +0000 (0:00:00.616) 0:03:37.129 ******* 2026-02-15 03:46:30.080770 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:30.080781 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:30.080799 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:52.932927 | orchestrator | 2026-02-15 03:46:52.933030 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-15 03:46:52.933044 | orchestrator | Sunday 15 February 2026 03:46:30 +0000 (0:00:00.352) 0:03:37.482 ******* 2026-02-15 03:46:52.933055 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.933065 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.933073 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:52.933082 | orchestrator | 2026-02-15 03:46:52.933091 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-15 03:46:52.933100 | orchestrator | Sunday 15 February 2026 03:46:30 +0000 (0:00:00.710) 0:03:38.192 ******* 2026-02-15 03:46:52.933109 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.933118 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.933126 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:52.933135 | orchestrator | 2026-02-15 03:46:52.933144 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-15 03:46:52.933168 | orchestrator | Sunday 15 February 2026 03:46:31 +0000 (0:00:00.756) 0:03:38.948 ******* 2026-02-15 03:46:52.933187 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:52.933197 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:52.933206 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:52.933215 | orchestrator | 2026-02-15 03:46:52.933224 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-15 03:46:52.933234 | orchestrator | Sunday 
15 February 2026 03:46:32 +0000 (0:00:00.603) 0:03:39.552 ******* 2026-02-15 03:46:52.933242 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.933251 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.933260 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:52.933269 | orchestrator | 2026-02-15 03:46:52.933278 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-15 03:46:52.933286 | orchestrator | Sunday 15 February 2026 03:46:32 +0000 (0:00:00.399) 0:03:39.951 ******* 2026-02-15 03:46:52.933295 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:52.933304 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:52.933313 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:52.933321 | orchestrator | 2026-02-15 03:46:52.933330 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-15 03:46:52.933339 | orchestrator | Sunday 15 February 2026 03:46:32 +0000 (0:00:00.377) 0:03:40.328 ******* 2026-02-15 03:46:52.933369 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:52.933379 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:52.933387 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:52.933396 | orchestrator | 2026-02-15 03:46:52.933405 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-15 03:46:52.933413 | orchestrator | Sunday 15 February 2026 03:46:33 +0000 (0:00:00.359) 0:03:40.688 ******* 2026-02-15 03:46:52.933422 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:52.933430 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:52.933439 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:52.933448 | orchestrator | 2026-02-15 03:46:52.933456 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-15 03:46:52.933465 | orchestrator | Sunday 15 February 2026 
03:46:33 +0000 (0:00:00.667) 0:03:41.355 ******* 2026-02-15 03:46:52.933474 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:52.933485 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:52.933495 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:52.933505 | orchestrator | 2026-02-15 03:46:52.933515 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-15 03:46:52.933525 | orchestrator | Sunday 15 February 2026 03:46:34 +0000 (0:00:00.349) 0:03:41.704 ******* 2026-02-15 03:46:52.933536 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:52.933546 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:46:52.933556 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:46:52.933585 | orchestrator | 2026-02-15 03:46:52.933595 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-15 03:46:52.933605 | orchestrator | Sunday 15 February 2026 03:46:34 +0000 (0:00:00.353) 0:03:42.058 ******* 2026-02-15 03:46:52.933616 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.933625 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.933635 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:52.933645 | orchestrator | 2026-02-15 03:46:52.933655 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-15 03:46:52.933678 | orchestrator | Sunday 15 February 2026 03:46:34 +0000 (0:00:00.353) 0:03:42.411 ******* 2026-02-15 03:46:52.933688 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.933698 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.933708 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:52.933718 | orchestrator | 2026-02-15 03:46:52.933728 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-15 03:46:52.933738 | orchestrator | Sunday 15 February 2026 03:46:35 +0000 (0:00:00.644) 
0:03:43.056 ******* 2026-02-15 03:46:52.933748 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.933758 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.933768 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:52.933778 | orchestrator | 2026-02-15 03:46:52.933788 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-02-15 03:46:52.933798 | orchestrator | Sunday 15 February 2026 03:46:36 +0000 (0:00:00.611) 0:03:43.667 ******* 2026-02-15 03:46:52.933809 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.933819 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.933829 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:52.933837 | orchestrator | 2026-02-15 03:46:52.933846 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-02-15 03:46:52.933855 | orchestrator | Sunday 15 February 2026 03:46:36 +0000 (0:00:00.363) 0:03:44.030 ******* 2026-02-15 03:46:52.933864 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:46:52.933873 | orchestrator | 2026-02-15 03:46:52.933882 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-02-15 03:46:52.933891 | orchestrator | Sunday 15 February 2026 03:46:37 +0000 (0:00:01.031) 0:03:45.062 ******* 2026-02-15 03:46:52.933900 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:46:52.933909 | orchestrator | 2026-02-15 03:46:52.933924 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-15 03:46:52.933948 | orchestrator | Sunday 15 February 2026 03:46:37 +0000 (0:00:00.176) 0:03:45.239 ******* 2026-02-15 03:46:52.933958 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-15 03:46:52.933966 | orchestrator | 2026-02-15 03:46:52.933975 | orchestrator | TASK [ceph-mon : Set_fact 
_initial_mon_key_success] **************************** 2026-02-15 03:46:52.933984 | orchestrator | Sunday 15 February 2026 03:46:38 +0000 (0:00:01.158) 0:03:46.397 ******* 2026-02-15 03:46:52.933993 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.934002 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.934010 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:52.934074 | orchestrator | 2026-02-15 03:46:52.934083 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-15 03:46:52.934092 | orchestrator | Sunday 15 February 2026 03:46:39 +0000 (0:00:00.422) 0:03:46.820 ******* 2026-02-15 03:46:52.934101 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.934110 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.934118 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:52.934154 | orchestrator | 2026-02-15 03:46:52.934164 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-15 03:46:52.934173 | orchestrator | Sunday 15 February 2026 03:46:40 +0000 (0:00:00.649) 0:03:47.469 ******* 2026-02-15 03:46:52.934182 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:46:52.934191 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:46:52.934200 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:46:52.934208 | orchestrator | 2026-02-15 03:46:52.934217 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-15 03:46:52.934226 | orchestrator | Sunday 15 February 2026 03:46:41 +0000 (0:00:01.221) 0:03:48.691 ******* 2026-02-15 03:46:52.934235 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:46:52.934244 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:46:52.934252 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:46:52.934261 | orchestrator | 2026-02-15 03:46:52.934270 | orchestrator | TASK [ceph-mon : Create monitor directory] 
************************************* 2026-02-15 03:46:52.934279 | orchestrator | Sunday 15 February 2026 03:46:42 +0000 (0:00:00.830) 0:03:49.521 ******* 2026-02-15 03:46:52.934288 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:46:52.934296 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:46:52.934305 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:46:52.934314 | orchestrator | 2026-02-15 03:46:52.934322 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-15 03:46:52.934331 | orchestrator | Sunday 15 February 2026 03:46:42 +0000 (0:00:00.715) 0:03:50.237 ******* 2026-02-15 03:46:52.934340 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.934349 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.934357 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:52.934366 | orchestrator | 2026-02-15 03:46:52.934375 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-15 03:46:52.934384 | orchestrator | Sunday 15 February 2026 03:46:43 +0000 (0:00:00.998) 0:03:51.235 ******* 2026-02-15 03:46:52.934393 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:46:52.934401 | orchestrator | 2026-02-15 03:46:52.934410 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-15 03:46:52.934419 | orchestrator | Sunday 15 February 2026 03:46:45 +0000 (0:00:01.397) 0:03:52.633 ******* 2026-02-15 03:46:52.934428 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.934436 | orchestrator | 2026-02-15 03:46:52.934445 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-15 03:46:52.934454 | orchestrator | Sunday 15 February 2026 03:46:46 +0000 (0:00:00.788) 0:03:53.421 ******* 2026-02-15 03:46:52.934463 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-15 03:46:52.934475 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:46:52.934490 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:46:52.934519 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-15 03:46:52.934542 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-15 03:46:52.934556 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-15 03:46:52.934595 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-15 03:46:52.934609 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2026-02-15 03:46:52.934631 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-15 03:46:52.934645 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2026-02-15 03:46:52.934660 | orchestrator | ok: [testbed-node-2] => (item=None) 2026-02-15 03:46:52.934675 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-02-15 03:46:52.934689 | orchestrator | 2026-02-15 03:46:52.934704 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-15 03:46:52.934717 | orchestrator | Sunday 15 February 2026 03:46:49 +0000 (0:00:03.205) 0:03:56.627 ******* 2026-02-15 03:46:52.934732 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:46:52.934747 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:46:52.934763 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:46:52.934777 | orchestrator | 2026-02-15 03:46:52.934792 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-15 03:46:52.934806 | orchestrator | Sunday 15 February 2026 03:46:50 +0000 (0:00:01.239) 0:03:57.866 ******* 2026-02-15 03:46:52.934821 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.934837 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.934852 | orchestrator | ok: [testbed-node-2] 
2026-02-15 03:46:52.934867 | orchestrator | 2026-02-15 03:46:52.934881 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-15 03:46:52.934896 | orchestrator | Sunday 15 February 2026 03:46:51 +0000 (0:00:00.655) 0:03:58.522 ******* 2026-02-15 03:46:52.934911 | orchestrator | ok: [testbed-node-0] 2026-02-15 03:46:52.934925 | orchestrator | ok: [testbed-node-1] 2026-02-15 03:46:52.934940 | orchestrator | ok: [testbed-node-2] 2026-02-15 03:46:52.934954 | orchestrator | 2026-02-15 03:46:52.934970 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-15 03:46:52.934986 | orchestrator | Sunday 15 February 2026 03:46:51 +0000 (0:00:00.355) 0:03:58.877 ******* 2026-02-15 03:46:52.935001 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:46:52.935017 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:46:52.935030 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:46:52.935044 | orchestrator | 2026-02-15 03:46:52.935071 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-15 03:47:54.318892 | orchestrator | Sunday 15 February 2026 03:46:52 +0000 (0:00:01.454) 0:04:00.332 ******* 2026-02-15 03:47:54.319007 | orchestrator | changed: [testbed-node-0] 2026-02-15 03:47:54.319023 | orchestrator | changed: [testbed-node-1] 2026-02-15 03:47:54.319032 | orchestrator | changed: [testbed-node-2] 2026-02-15 03:47:54.319041 | orchestrator | 2026-02-15 03:47:54.319050 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-15 03:47:54.319058 | orchestrator | Sunday 15 February 2026 03:46:54 +0000 (0:00:01.306) 0:04:01.639 ******* 2026-02-15 03:47:54.319066 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:47:54.319074 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:47:54.319082 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:47:54.319090 
| orchestrator | 2026-02-15 03:47:54.319099 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-15 03:47:54.319108 | orchestrator | Sunday 15 February 2026 03:46:54 +0000 (0:00:00.530) 0:04:02.169 ******* 2026-02-15 03:47:54.319118 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:47:54.319128 | orchestrator | 2026-02-15 03:47:54.319137 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-15 03:47:54.319145 | orchestrator | Sunday 15 February 2026 03:46:55 +0000 (0:00:00.564) 0:04:02.734 ******* 2026-02-15 03:47:54.319179 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:47:54.319188 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:47:54.319197 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:47:54.319205 | orchestrator | 2026-02-15 03:47:54.319214 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-15 03:47:54.319223 | orchestrator | Sunday 15 February 2026 03:46:55 +0000 (0:00:00.346) 0:04:03.080 ******* 2026-02-15 03:47:54.319231 | orchestrator | skipping: [testbed-node-0] 2026-02-15 03:47:54.319238 | orchestrator | skipping: [testbed-node-1] 2026-02-15 03:47:54.319246 | orchestrator | skipping: [testbed-node-2] 2026-02-15 03:47:54.319253 | orchestrator | 2026-02-15 03:47:54.319261 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-15 03:47:54.319269 | orchestrator | Sunday 15 February 2026 03:46:56 +0000 (0:00:00.468) 0:04:03.549 ******* 2026-02-15 03:47:54.319276 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 03:47:54.319286 | orchestrator | 2026-02-15 03:47:54.319294 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] 
*****************
2026-02-15 03:47:54.319302 | orchestrator | Sunday 15 February 2026 03:46:56 +0000 (0:00:00.546) 0:04:04.096 *******
2026-02-15 03:47:54.319309 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:47:54.319317 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:47:54.319326 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:47:54.319334 | orchestrator |
2026-02-15 03:47:54.319341 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2026-02-15 03:47:54.319349 | orchestrator | Sunday 15 February 2026 03:46:58 +0000 (0:00:01.881) 0:04:05.977 *******
2026-02-15 03:47:54.319358 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:47:54.319366 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:47:54.319374 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:47:54.319381 | orchestrator |
2026-02-15 03:47:54.319389 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2026-02-15 03:47:54.319397 | orchestrator | Sunday 15 February 2026 03:47:00 +0000 (0:00:01.561) 0:04:07.539 *******
2026-02-15 03:47:54.319405 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:47:54.319413 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:47:54.319420 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:47:54.319429 | orchestrator |
2026-02-15 03:47:54.319436 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2026-02-15 03:47:54.319444 | orchestrator | Sunday 15 February 2026 03:47:02 +0000 (0:00:01.881) 0:04:09.420 *******
2026-02-15 03:47:54.319452 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:47:54.319461 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:47:54.319468 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:47:54.319476 | orchestrator |
2026-02-15 03:47:54.319484 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2026-02-15 03:47:54.319504 | orchestrator | Sunday 15 February 2026 03:47:03 +0000 (0:00:01.956) 0:04:11.377 *******
2026-02-15 03:47:54.319512 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:47:54.319521 | orchestrator |
2026-02-15 03:47:54.319530 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2026-02-15 03:47:54.319538 | orchestrator | Sunday 15 February 2026 03:47:04 +0000 (0:00:00.929) 0:04:12.306 *******
2026-02-15 03:47:54.319546 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2026-02-15 03:47:54.319554 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:47:54.319563 | orchestrator |
2026-02-15 03:47:54.319572 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2026-02-15 03:47:54.319601 | orchestrator | Sunday 15 February 2026 03:47:26 +0000 (0:00:21.855) 0:04:34.162 *******
2026-02-15 03:47:54.319609 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:47:54.319617 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:47:54.319633 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:47:54.319641 | orchestrator |
2026-02-15 03:47:54.319649 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2026-02-15 03:47:54.319657 | orchestrator | Sunday 15 February 2026 03:47:35 +0000 (0:00:08.807) 0:04:42.970 *******
2026-02-15 03:47:54.319665 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:47:54.319674 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:47:54.319682 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:47:54.319690 | orchestrator |
2026-02-15 03:47:54.319698 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2026-02-15 03:47:54.319706 | orchestrator | Sunday 15 February 2026 03:47:35 +0000 (0:00:00.337) 0:04:43.307 *******
2026-02-15 03:47:54.319735 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d5d5fa28728afbeaf1398164b994b28aae20bb8'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2026-02-15 03:47:54.319746 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d5d5fa28728afbeaf1398164b994b28aae20bb8'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2026-02-15 03:47:54.319756 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d5d5fa28728afbeaf1398164b994b28aae20bb8'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2026-02-15 03:47:54.319766 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d5d5fa28728afbeaf1398164b994b28aae20bb8'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2026-02-15 03:47:54.319774 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d5d5fa28728afbeaf1398164b994b28aae20bb8'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2026-02-15 03:47:54.319784 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__2d5d5fa28728afbeaf1398164b994b28aae20bb8'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__2d5d5fa28728afbeaf1398164b994b28aae20bb8'}])
2026-02-15 03:47:54.319794 | orchestrator |
2026-02-15 03:47:54.319802 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-15 03:47:54.319811 | orchestrator | Sunday 15 February 2026 03:47:50 +0000 (0:00:14.413) 0:04:57.721 *******
2026-02-15 03:47:54.319819 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:47:54.319828 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:47:54.319836 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:47:54.319844 | orchestrator |
2026-02-15 03:47:54.319853 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-15 03:47:54.319860 | orchestrator | Sunday 15 February 2026 03:47:50 +0000 (0:00:00.414) 0:04:58.135 *******
2026-02-15 03:47:54.319874 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:47:54.319889 | orchestrator |
2026-02-15 03:47:54.319898 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-02-15 03:47:54.319907 | orchestrator | Sunday 15 February 2026 03:47:51 +0000 (0:00:00.862) 0:04:58.998 *******
2026-02-15 03:47:54.319917 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:47:54.319925 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:47:54.319934 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:47:54.319943 | orchestrator |
2026-02-15 03:47:54.319951 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-02-15 03:47:54.319960 | orchestrator | Sunday 15 February 2026 03:47:51 +0000 (0:00:00.372) 0:04:59.371 *******
2026-02-15 03:47:54.319968 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:47:54.319976 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:47:54.319984 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:47:54.319992 | orchestrator |
2026-02-15 03:47:54.320000 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-02-15 03:47:54.320008 | orchestrator | Sunday 15 February 2026 03:47:52 +0000 (0:00:00.400) 0:04:59.772 *******
2026-02-15 03:47:54.320016 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 03:47:54.320024 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-15 03:47:54.320032 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-15 03:47:54.320041 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:47:54.320049 | orchestrator |
2026-02-15 03:47:54.320057 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-02-15 03:47:54.320064 | orchestrator | Sunday 15 February 2026 03:47:53 +0000 (0:00:01.055) 0:05:00.828 *******
2026-02-15 03:47:54.320071 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:47:54.320079 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:47:54.320086 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:47:54.320095 | orchestrator |
2026-02-15 03:47:54.320104 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2026-02-15 03:47:54.320112 | orchestrator |
2026-02-15 03:47:54.320128 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 03:48:22.315224 | orchestrator | Sunday 15 February 2026 03:47:54 +0000 (0:00:00.886) 0:05:01.714 *******
2026-02-15 03:48:22.315345 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:48:22.315362 | orchestrator |
2026-02-15 03:48:22.315376 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 03:48:22.315388 | orchestrator | Sunday 15 February 2026 03:47:54 +0000 (0:00:00.629) 0:05:02.344 *******
2026-02-15 03:48:22.315399 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:48:22.315411 | orchestrator |
2026-02-15 03:48:22.315422 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 03:48:22.315434 | orchestrator | Sunday 15 February 2026 03:47:55 +0000 (0:00:00.842) 0:05:03.186 *******
2026-02-15 03:48:22.315445 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:48:22.315458 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:48:22.315469 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:48:22.315480 | orchestrator |
2026-02-15 03:48:22.315492 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 03:48:22.315503 | orchestrator | Sunday 15 February 2026 03:47:56 +0000 (0:00:00.785) 0:05:03.971 *******
2026-02-15 03:48:22.315514 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.315527 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.315538 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.315549 | orchestrator |
2026-02-15 03:48:22.315560 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 03:48:22.315572 | orchestrator | Sunday 15 February 2026 03:47:56 +0000 (0:00:00.389) 0:05:04.361 *******
2026-02-15 03:48:22.315661 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.315701 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.315713 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.315724 | orchestrator |
2026-02-15 03:48:22.315736 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 03:48:22.315754 | orchestrator | Sunday 15 February 2026 03:47:57 +0000 (0:00:00.634) 0:05:04.995 *******
2026-02-15 03:48:22.315773 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.315792 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.315809 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.315826 | orchestrator |
2026-02-15 03:48:22.315844 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 03:48:22.315861 | orchestrator | Sunday 15 February 2026 03:47:57 +0000 (0:00:00.368) 0:05:05.364 *******
2026-02-15 03:48:22.315879 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:48:22.315895 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:48:22.315913 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:48:22.315932 | orchestrator |
2026-02-15 03:48:22.315950 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 03:48:22.315970 | orchestrator | Sunday 15 February 2026 03:47:58 +0000 (0:00:00.837) 0:05:06.202 *******
2026-02-15 03:48:22.315990 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.316032 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.316063 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.316077 | orchestrator |
2026-02-15 03:48:22.316090 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 03:48:22.316103 | orchestrator | Sunday 15 February 2026 03:47:59 +0000 (0:00:00.347) 0:05:06.549 *******
2026-02-15 03:48:22.316115 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.316128 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.316140 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.316151 | orchestrator |
2026-02-15 03:48:22.316162 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 03:48:22.316173 | orchestrator | Sunday 15 February 2026 03:47:59 +0000 (0:00:00.633) 0:05:07.183 *******
2026-02-15 03:48:22.316184 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:48:22.316195 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:48:22.316206 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:48:22.316217 | orchestrator |
2026-02-15 03:48:22.316244 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 03:48:22.316255 | orchestrator | Sunday 15 February 2026 03:48:00 +0000 (0:00:00.760) 0:05:07.943 *******
2026-02-15 03:48:22.316267 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:48:22.316278 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:48:22.316289 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:48:22.316300 | orchestrator |
2026-02-15 03:48:22.316311 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 03:48:22.316322 | orchestrator | Sunday 15 February 2026 03:48:01 +0000 (0:00:00.769) 0:05:08.713 *******
2026-02-15 03:48:22.316333 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.316344 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.316355 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.316367 | orchestrator |
2026-02-15 03:48:22.316377 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 03:48:22.316388 | orchestrator | Sunday 15 February 2026 03:48:01 +0000 (0:00:00.337) 0:05:09.051 *******
2026-02-15 03:48:22.316399 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:48:22.316410 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:48:22.316421 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:48:22.316432 | orchestrator |
2026-02-15 03:48:22.316443 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 03:48:22.316454 | orchestrator | Sunday 15 February 2026 03:48:02 +0000 (0:00:00.674) 0:05:09.726 *******
2026-02-15 03:48:22.316465 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.316476 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.316497 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.316508 | orchestrator |
2026-02-15 03:48:22.316519 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 03:48:22.316530 | orchestrator | Sunday 15 February 2026 03:48:02 +0000 (0:00:00.353) 0:05:10.080 *******
2026-02-15 03:48:22.316541 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.316552 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.316563 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.316574 | orchestrator |
2026-02-15 03:48:22.316632 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 03:48:22.316644 | orchestrator | Sunday 15 February 2026 03:48:03 +0000 (0:00:00.369) 0:05:10.449 *******
2026-02-15 03:48:22.316656 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.316667 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.316678 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.316689 | orchestrator |
2026-02-15 03:48:22.316700 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 03:48:22.316719 | orchestrator | Sunday 15 February 2026 03:48:03 +0000 (0:00:00.376) 0:05:10.825 *******
2026-02-15 03:48:22.316740 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.316770 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.316788 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.316805 | orchestrator |
2026-02-15 03:48:22.316823 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 03:48:22.316841 | orchestrator | Sunday 15 February 2026 03:48:04 +0000 (0:00:00.696) 0:05:11.522 *******
2026-02-15 03:48:22.316856 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.316872 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.316889 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.316907 | orchestrator |
2026-02-15 03:48:22.316924 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 03:48:22.316941 | orchestrator | Sunday 15 February 2026 03:48:04 +0000 (0:00:00.364) 0:05:11.886 *******
2026-02-15 03:48:22.316960 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:48:22.316978 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:48:22.316996 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:48:22.317014 | orchestrator |
2026-02-15 03:48:22.317033 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 03:48:22.317050 | orchestrator | Sunday 15 February 2026 03:48:04 +0000 (0:00:00.370) 0:05:12.257 *******
2026-02-15 03:48:22.317069 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:48:22.317082 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:48:22.317093 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:48:22.317104 | orchestrator |
2026-02-15 03:48:22.317115 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 03:48:22.317126 | orchestrator | Sunday 15 February 2026 03:48:05 +0000 (0:00:00.384) 0:05:12.641 *******
2026-02-15 03:48:22.317137 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:48:22.317148 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:48:22.317158 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:48:22.317169 | orchestrator |
2026-02-15 03:48:22.317180 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-15 03:48:22.317191 | orchestrator | Sunday 15 February 2026 03:48:06 +0000 (0:00:00.891) 0:05:13.533 *******
2026-02-15 03:48:22.317203 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 03:48:22.317214 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 03:48:22.317226 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 03:48:22.317237 | orchestrator |
2026-02-15 03:48:22.317248 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-15 03:48:22.317259 | orchestrator | Sunday 15 February 2026 03:48:06 +0000 (0:00:00.735) 0:05:14.268 *******
2026-02-15 03:48:22.317270 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:48:22.317294 | orchestrator |
2026-02-15 03:48:22.317305 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-15 03:48:22.317316 | orchestrator | Sunday 15 February 2026 03:48:07 +0000 (0:00:00.597) 0:05:14.866 *******
2026-02-15 03:48:22.317327 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:48:22.317338 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:48:22.317349 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:48:22.317360 | orchestrator |
2026-02-15 03:48:22.317371 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-15 03:48:22.317382 | orchestrator | Sunday 15 February 2026 03:48:08 +0000 (0:00:01.064) 0:05:15.930 *******
2026-02-15 03:48:22.317394 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:48:22.317413 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:48:22.317425 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:48:22.317436 | orchestrator |
2026-02-15 03:48:22.317447 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-15 03:48:22.317459 | orchestrator | Sunday 15 February 2026 03:48:08 +0000 (0:00:00.354) 0:05:16.285 *******
2026-02-15 03:48:22.317470 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-15 03:48:22.317481 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-15 03:48:22.317492 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-15 03:48:22.317503 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-15 03:48:22.317514 | orchestrator |
2026-02-15 03:48:22.317526 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-15 03:48:22.317537 | orchestrator | Sunday 15 February 2026 03:48:19 +0000 (0:00:10.439) 0:05:26.724 *******
2026-02-15 03:48:22.317548 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:48:22.317559 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:48:22.317570 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:48:22.317612 | orchestrator |
2026-02-15 03:48:22.317627 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-15 03:48:22.317638 | orchestrator | Sunday 15 February 2026 03:48:19 +0000 (0:00:00.376) 0:05:27.100 *******
2026-02-15 03:48:22.317649 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-15 03:48:22.317660 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-15 03:48:22.317671 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-15 03:48:22.317682 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-15 03:48:22.317693 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-15 03:48:22.317704 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-15 03:48:22.317714 | orchestrator |
2026-02-15 03:48:22.317726 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-15 03:48:22.317747 | orchestrator | Sunday 15 February 2026 03:48:22 +0000 (0:00:02.607) 0:05:29.708 *******
2026-02-15 03:49:24.406970 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-15 03:49:24.407081 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-15 03:49:24.407094 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-15 03:49:24.407103 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-15 03:49:24.407113 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-02-15 03:49:24.407121 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-02-15 03:49:24.407130 | orchestrator |
2026-02-15 03:49:24.407142 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-15 03:49:24.407152 | orchestrator | Sunday 15 February 2026 03:48:23 +0000 (0:00:01.288) 0:05:30.996 *******
2026-02-15 03:49:24.407161 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:49:24.407170 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:49:24.407179 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:49:24.407187 | orchestrator |
2026-02-15 03:49:24.407195 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-15 03:49:24.407204 | orchestrator | Sunday 15 February 2026 03:48:24 +0000 (0:00:00.699) 0:05:31.696 *******
2026-02-15 03:49:24.407235 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:49:24.407245 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:49:24.407253 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:49:24.407261 | orchestrator |
2026-02-15 03:49:24.407269 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-15 03:49:24.407277 | orchestrator | Sunday 15 February 2026 03:48:24 +0000 (0:00:00.364) 0:05:32.061 *******
2026-02-15 03:49:24.407285 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:49:24.407293 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:49:24.407301 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:49:24.407309 | orchestrator |
2026-02-15 03:49:24.407318 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-15 03:49:24.407326 | orchestrator | Sunday 15 February 2026 03:48:25 +0000 (0:00:00.609) 0:05:32.671 *******
2026-02-15 03:49:24.407335 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:49:24.407343 | orchestrator |
2026-02-15 03:49:24.407352 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-15 03:49:24.407360 | orchestrator | Sunday 15 February 2026 03:48:25 +0000 (0:00:00.642) 0:05:33.313 *******
2026-02-15 03:49:24.407368 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:49:24.407377 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:49:24.407385 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:49:24.407393 | orchestrator |
2026-02-15 03:49:24.407401 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-15 03:49:24.407410 | orchestrator | Sunday 15 February 2026 03:48:26 +0000 (0:00:00.401) 0:05:33.714 *******
2026-02-15 03:49:24.407418 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:49:24.407425 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:49:24.407432 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:49:24.407439 | orchestrator |
2026-02-15 03:49:24.407447 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-15 03:49:24.407455 | orchestrator | Sunday 15 February 2026 03:48:26 +0000 (0:00:00.678) 0:05:34.393 *******
2026-02-15 03:49:24.407464 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:49:24.407473 | orchestrator |
2026-02-15 03:49:24.407482 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-15 03:49:24.407490 | orchestrator | Sunday 15 February 2026 03:48:27 +0000 (0:00:00.625) 0:05:35.018 *******
2026-02-15 03:49:24.407498 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:49:24.407506 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:49:24.407515 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:49:24.407523 | orchestrator |
2026-02-15 03:49:24.407532 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-15 03:49:24.407541 | orchestrator | Sunday 15 February 2026 03:48:28 +0000 (0:00:01.220) 0:05:36.239 *******
2026-02-15 03:49:24.407563 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:49:24.407573 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:49:24.407580 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:49:24.407588 | orchestrator |
2026-02-15 03:49:24.407631 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-15 03:49:24.407640 | orchestrator | Sunday 15 February 2026 03:48:30 +0000 (0:00:01.538) 0:05:37.777 *******
2026-02-15 03:49:24.407649 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:49:24.407657 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:49:24.407665 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:49:24.407674 | orchestrator |
2026-02-15 03:49:24.407682 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-15 03:49:24.407691 | orchestrator | Sunday 15 February 2026 03:48:32 +0000 (0:00:01.810) 0:05:39.587 *******
2026-02-15 03:49:24.407699 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:49:24.407717 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:49:24.407725 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:49:24.407734 | orchestrator |
2026-02-15 03:49:24.407742 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-15 03:49:24.407751 | orchestrator | Sunday 15 February 2026 03:48:34 +0000 (0:00:01.999) 0:05:41.587 *******
2026-02-15 03:49:24.407758 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:49:24.407766 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:49:24.407774 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2
2026-02-15 03:49:24.407781 | orchestrator |
2026-02-15 03:49:24.407788 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************
2026-02-15 03:49:24.407796 | orchestrator | Sunday 15 February 2026 03:48:34 +0000 (0:00:00.699) 0:05:42.286 *******
2026-02-15 03:49:24.407804 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
2026-02-15 03:49:24.407813 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
2026-02-15 03:49:24.407843 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
2026-02-15 03:49:24.407853 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
2026-02-15 03:49:24.407861 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
2026-02-15 03:49:24.407870 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-15 03:49:24.407878 | orchestrator |
2026-02-15 03:49:24.407885 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-02-15 03:49:24.407893 | orchestrator | Sunday 15 February 2026 03:49:05 +0000 (0:00:30.305) 0:06:12.592 *******
2026-02-15 03:49:24.407900 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-15 03:49:24.407909 | orchestrator |
2026-02-15 03:49:24.407917 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-02-15 03:49:24.407926 | orchestrator | Sunday 15 February 2026 03:49:06 +0000 (0:00:01.438) 0:06:14.031 *******
2026-02-15 03:49:24.407933 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:49:24.407940 | orchestrator |
2026-02-15 03:49:24.407949 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-02-15 03:49:24.407956 | orchestrator | Sunday 15 February 2026 03:49:06 +0000 (0:00:00.366) 0:06:14.397 *******
2026-02-15 03:49:24.407963 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:49:24.407970 | orchestrator |
2026-02-15 03:49:24.407978 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-02-15 03:49:24.407986 | orchestrator | Sunday 15 February 2026 03:49:07 +0000 (0:00:00.172) 0:06:14.570 *******
2026-02-15 03:49:24.407994 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-02-15 03:49:24.408002 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-02-15 03:49:24.408010 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-02-15 03:49:24.408018 | orchestrator |
2026-02-15 03:49:24.408026 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-02-15 03:49:24.408035 | orchestrator | Sunday 15 February 2026 03:49:13 +0000 (0:00:06.495) 0:06:21.066 *******
2026-02-15 03:49:24.408042 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-02-15 03:49:24.408050 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-02-15 03:49:24.408058 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-02-15 03:49:24.408066 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-02-15 03:49:24.408074 | orchestrator |
2026-02-15 03:49:24.408083 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-15 03:49:24.408090 | orchestrator | Sunday 15 February 2026 03:49:18 +0000 (0:00:05.194) 0:06:26.260 *******
2026-02-15 03:49:24.408106 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:49:24.408113 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:49:24.408121 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:49:24.408129 | orchestrator |
2026-02-15 03:49:24.408137 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-15 03:49:24.408145 | orchestrator | Sunday 15 February 2026 03:49:19 +0000 (0:00:00.732) 0:06:26.992 *******
2026-02-15 03:49:24.408153 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:49:24.408162 | orchestrator |
2026-02-15 03:49:24.408169 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-02-15 03:49:24.408177 | orchestrator | Sunday 15 February 2026 03:49:20 +0000 (0:00:00.632) 0:06:27.625 *******
2026-02-15 03:49:24.408185 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:49:24.408194 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:49:24.408202 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:49:24.408210 | orchestrator |
2026-02-15 03:49:24.408227 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-02-15 03:49:24.408236 | orchestrator | Sunday 15 February 2026 03:49:20 +0000 (0:00:00.671) 0:06:28.296 *******
2026-02-15 03:49:24.408245 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:49:24.408252 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:49:24.408260 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:49:24.408268 | orchestrator |
2026-02-15 03:49:24.408276 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-02-15 03:49:24.408285 | orchestrator | Sunday 15 February 2026 03:49:22 +0000 (0:00:01.310) 0:06:29.607 *******
2026-02-15 03:49:24.408294 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 03:49:24.408301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-15 03:49:24.408310 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-15 03:49:24.408317 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:49:24.408325 | orchestrator |
2026-02-15 03:49:24.408332 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-02-15 03:49:24.408341 | orchestrator | Sunday 15 February 2026 03:49:22 +0000 (0:00:00.681) 0:06:30.288 *******
2026-02-15 03:49:24.408349 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:49:24.408356 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:49:24.408364 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:49:24.408373 | orchestrator |
2026-02-15 03:49:24.408382 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-02-15 03:49:24.408390 | orchestrator |
2026-02-15 03:49:24.408398 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 03:49:24.408406 | orchestrator | Sunday 15 February 2026 03:49:23 +0000 (0:00:00.654) 0:06:30.942 *******
2026-02-15 03:49:24.408413 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:49:24.408423 | orchestrator |
2026-02-15 03:49:24.408431 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 03:49:24.408451 | orchestrator | Sunday 15 February 2026 03:49:24 +0000 (0:00:00.866) 0:06:31.809 *******
2026-02-15 03:49:42.103755 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:49:42.103874 | orchestrator |
2026-02-15 03:49:42.103891 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 03:49:42.103905 | orchestrator | Sunday 15 February 2026 03:49:25 +0000 (0:00:00.858) 0:06:32.668 *******
2026-02-15 03:49:42.103916 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:49:42.103929 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:49:42.103939 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:49:42.103951 | orchestrator |
2026-02-15 03:49:42.103963 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 03:49:42.103999 | orchestrator | Sunday 15 February 2026 03:49:25 +0000 (0:00:00.360) 0:06:33.028 *******
2026-02-15 03:49:42.104011 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:49:42.104023 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:49:42.104034 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:49:42.104045 | orchestrator |
2026-02-15 03:49:42.104056 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 03:49:42.104067 | orchestrator | Sunday 15 February 2026 03:49:26 +0000 (0:00:00.749) 0:06:33.778 *******
2026-02-15 03:49:42.104078 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:49:42.104089 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:49:42.104100 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:49:42.104111 | orchestrator |
2026-02-15 03:49:42.104122 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 03:49:42.104133 | orchestrator | Sunday 15 February 2026 03:49:27 +0000 (0:00:00.774) 0:06:34.552 *******
2026-02-15 03:49:42.104144 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:49:42.104155 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:49:42.104166 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:49:42.104177 | orchestrator |
2026-02-15 03:49:42.104188 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 03:49:42.104199 | orchestrator | Sunday 15 February 2026 03:49:28 +0000 (0:00:00.986) 0:06:35.538 *******
2026-02-15 03:49:42.104210 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:49:42.104221 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:49:42.104232 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:49:42.104243 | orchestrator |
2026-02-15 03:49:42.104254 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 03:49:42.104265 | orchestrator | Sunday 15 February 2026 03:49:28 +0000 (0:00:00.339) 0:06:35.878 *******
2026-02-15 03:49:42.104276 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:49:42.104287 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:49:42.104298 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:49:42.104309 | orchestrator |
2026-02-15 03:49:42.104320 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 03:49:42.104331 | orchestrator | Sunday 15 February 2026 03:49:28 +0000 (0:00:00.350) 0:06:36.228 *******
2026-02-15 03:49:42.104342 | 
orchestrator | skipping: [testbed-node-3] 2026-02-15 03:49:42.104353 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:49:42.104365 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:49:42.104376 | orchestrator | 2026-02-15 03:49:42.104387 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-15 03:49:42.104398 | orchestrator | Sunday 15 February 2026 03:49:29 +0000 (0:00:00.334) 0:06:36.563 ******* 2026-02-15 03:49:42.104409 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:49:42.104420 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:49:42.104431 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:49:42.104442 | orchestrator | 2026-02-15 03:49:42.104453 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-15 03:49:42.104464 | orchestrator | Sunday 15 February 2026 03:49:30 +0000 (0:00:01.017) 0:06:37.581 ******* 2026-02-15 03:49:42.104475 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:49:42.104486 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:49:42.104496 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:49:42.104507 | orchestrator | 2026-02-15 03:49:42.104518 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-15 03:49:42.104544 | orchestrator | Sunday 15 February 2026 03:49:30 +0000 (0:00:00.744) 0:06:38.325 ******* 2026-02-15 03:49:42.104556 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:49:42.104567 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:49:42.104578 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:49:42.104589 | orchestrator | 2026-02-15 03:49:42.104627 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-15 03:49:42.104639 | orchestrator | Sunday 15 February 2026 03:49:31 +0000 (0:00:00.370) 0:06:38.695 ******* 2026-02-15 03:49:42.104658 | orchestrator | skipping: 
[testbed-node-3] 2026-02-15 03:49:42.104669 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:49:42.104680 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:49:42.104691 | orchestrator | 2026-02-15 03:49:42.104702 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-15 03:49:42.104713 | orchestrator | Sunday 15 February 2026 03:49:31 +0000 (0:00:00.363) 0:06:39.059 ******* 2026-02-15 03:49:42.104723 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:49:42.104734 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:49:42.104745 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:49:42.104756 | orchestrator | 2026-02-15 03:49:42.104767 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-15 03:49:42.104779 | orchestrator | Sunday 15 February 2026 03:49:32 +0000 (0:00:00.672) 0:06:39.731 ******* 2026-02-15 03:49:42.104789 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:49:42.104800 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:49:42.104811 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:49:42.104821 | orchestrator | 2026-02-15 03:49:42.104832 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-15 03:49:42.104843 | orchestrator | Sunday 15 February 2026 03:49:32 +0000 (0:00:00.403) 0:06:40.135 ******* 2026-02-15 03:49:42.104854 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:49:42.104865 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:49:42.104876 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:49:42.104886 | orchestrator | 2026-02-15 03:49:42.104898 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-15 03:49:42.104909 | orchestrator | Sunday 15 February 2026 03:49:33 +0000 (0:00:00.394) 0:06:40.529 ******* 2026-02-15 03:49:42.104920 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:49:42.104947 | 
orchestrator | skipping: [testbed-node-4] 2026-02-15 03:49:42.104960 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:49:42.104971 | orchestrator | 2026-02-15 03:49:42.104982 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-15 03:49:42.104993 | orchestrator | Sunday 15 February 2026 03:49:33 +0000 (0:00:00.330) 0:06:40.860 ******* 2026-02-15 03:49:42.105004 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:49:42.105015 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:49:42.105026 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:49:42.105037 | orchestrator | 2026-02-15 03:49:42.105048 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-15 03:49:42.105059 | orchestrator | Sunday 15 February 2026 03:49:34 +0000 (0:00:00.646) 0:06:41.507 ******* 2026-02-15 03:49:42.105070 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:49:42.105081 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:49:42.105092 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:49:42.105103 | orchestrator | 2026-02-15 03:49:42.105114 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-15 03:49:42.105125 | orchestrator | Sunday 15 February 2026 03:49:34 +0000 (0:00:00.391) 0:06:41.898 ******* 2026-02-15 03:49:42.105136 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:49:42.105147 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:49:42.105157 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:49:42.105168 | orchestrator | 2026-02-15 03:49:42.105180 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-15 03:49:42.105191 | orchestrator | Sunday 15 February 2026 03:49:34 +0000 (0:00:00.371) 0:06:42.270 ******* 2026-02-15 03:49:42.105202 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:49:42.105212 | orchestrator | ok: 
[testbed-node-4] 2026-02-15 03:49:42.105223 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:49:42.105234 | orchestrator | 2026-02-15 03:49:42.105245 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-15 03:49:42.105256 | orchestrator | Sunday 15 February 2026 03:49:35 +0000 (0:00:00.875) 0:06:43.145 ******* 2026-02-15 03:49:42.105267 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:49:42.105278 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:49:42.105295 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:49:42.105307 | orchestrator | 2026-02-15 03:49:42.105318 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-15 03:49:42.105329 | orchestrator | Sunday 15 February 2026 03:49:36 +0000 (0:00:00.358) 0:06:43.504 ******* 2026-02-15 03:49:42.105340 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 03:49:42.105352 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 03:49:42.105363 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 03:49:42.105374 | orchestrator | 2026-02-15 03:49:42.105385 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-15 03:49:42.105396 | orchestrator | Sunday 15 February 2026 03:49:36 +0000 (0:00:00.716) 0:06:44.221 ******* 2026-02-15 03:49:42.105407 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:49:42.105418 | orchestrator | 2026-02-15 03:49:42.105429 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-15 03:49:42.105441 | orchestrator | Sunday 15 February 2026 03:49:37 +0000 (0:00:00.600) 0:06:44.821 ******* 2026-02-15 03:49:42.105451 | orchestrator | skipping: 
[testbed-node-3] 2026-02-15 03:49:42.105463 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:49:42.105474 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:49:42.105484 | orchestrator | 2026-02-15 03:49:42.105496 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-15 03:49:42.105506 | orchestrator | Sunday 15 February 2026 03:49:38 +0000 (0:00:00.652) 0:06:45.474 ******* 2026-02-15 03:49:42.105517 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:49:42.105528 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:49:42.105539 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:49:42.105550 | orchestrator | 2026-02-15 03:49:42.105567 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-15 03:49:42.105578 | orchestrator | Sunday 15 February 2026 03:49:38 +0000 (0:00:00.368) 0:06:45.842 ******* 2026-02-15 03:49:42.105589 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:49:42.105624 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:49:42.105637 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:49:42.105647 | orchestrator | 2026-02-15 03:49:42.105658 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-15 03:49:42.105669 | orchestrator | Sunday 15 February 2026 03:49:39 +0000 (0:00:00.635) 0:06:46.478 ******* 2026-02-15 03:49:42.105680 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:49:42.105691 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:49:42.105701 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:49:42.105712 | orchestrator | 2026-02-15 03:49:42.105723 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-15 03:49:42.105734 | orchestrator | Sunday 15 February 2026 03:49:39 +0000 (0:00:00.669) 0:06:47.147 ******* 2026-02-15 03:49:42.105745 | orchestrator | changed: [testbed-node-4] => (item={'name': 
'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-15 03:49:42.105756 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-15 03:49:42.105767 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-15 03:49:42.105779 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-15 03:49:42.105792 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-15 03:49:42.105811 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-15 03:49:42.105829 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-15 03:49:42.105864 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-15 03:50:49.165214 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-15 03:50:49.165340 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-15 03:50:49.165353 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-15 03:50:49.165362 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-15 03:50:49.165371 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-15 03:50:49.165380 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-15 03:50:49.165388 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-15 03:50:49.165397 | orchestrator | 2026-02-15 03:50:49.165411 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 
2026-02-15 03:50:49.165425 | orchestrator | Sunday 15 February 2026 03:49:42 +0000 (0:00:02.359) 0:06:49.506 ******* 2026-02-15 03:50:49.165439 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:50:49.165454 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:50:49.165469 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:50:49.165482 | orchestrator | 2026-02-15 03:50:49.165496 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-15 03:50:49.165510 | orchestrator | Sunday 15 February 2026 03:49:42 +0000 (0:00:00.361) 0:06:49.868 ******* 2026-02-15 03:50:49.165524 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:50:49.165534 | orchestrator | 2026-02-15 03:50:49.165542 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-15 03:50:49.165550 | orchestrator | Sunday 15 February 2026 03:49:43 +0000 (0:00:00.940) 0:06:50.809 ******* 2026-02-15 03:50:49.165558 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-15 03:50:49.165566 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-15 03:50:49.165574 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-15 03:50:49.165583 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-15 03:50:49.165591 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-15 03:50:49.165599 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-15 03:50:49.165607 | orchestrator | 2026-02-15 03:50:49.165642 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-15 03:50:49.165651 | orchestrator | Sunday 15 February 2026 03:49:44 +0000 (0:00:01.152) 0:06:51.961 ******* 2026-02-15 03:50:49.165659 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:50:49.165667 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-15 03:50:49.165675 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 03:50:49.165683 | orchestrator | 2026-02-15 03:50:49.165691 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-15 03:50:49.165698 | orchestrator | Sunday 15 February 2026 03:49:46 +0000 (0:00:02.220) 0:06:54.182 ******* 2026-02-15 03:50:49.165706 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-15 03:50:49.165714 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-15 03:50:49.165722 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:50:49.165730 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-15 03:50:49.165738 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-15 03:50:49.165745 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:50:49.165753 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-15 03:50:49.165776 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-15 03:50:49.165784 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:50:49.165793 | orchestrator | 2026-02-15 03:50:49.165801 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-15 03:50:49.165829 | orchestrator | Sunday 15 February 2026 03:49:48 +0000 (0:00:01.264) 0:06:55.446 ******* 2026-02-15 03:50:49.165837 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 03:50:49.165845 | orchestrator | 2026-02-15 03:50:49.165853 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-15 03:50:49.165861 | orchestrator | Sunday 15 February 2026 03:49:50 +0000 (0:00:02.079) 0:06:57.525 ******* 2026-02-15 03:50:49.165869 | orchestrator | included: 
/ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:50:49.165877 | orchestrator | 2026-02-15 03:50:49.165885 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2026-02-15 03:50:49.165892 | orchestrator | Sunday 15 February 2026 03:49:51 +0000 (0:00:00.913) 0:06:58.439 ******* 2026-02-15 03:50:49.165902 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'}) 2026-02-15 03:50:49.165911 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'}) 2026-02-15 03:50:49.165919 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'}) 2026-02-15 03:50:49.165927 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'}) 2026-02-15 03:50:49.165949 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'}) 2026-02-15 03:50:49.165957 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'}) 2026-02-15 03:50:49.165965 | orchestrator | 2026-02-15 03:50:49.165973 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-15 03:50:49.165981 | orchestrator | Sunday 15 February 2026 03:50:30 +0000 (0:00:39.453) 0:07:37.892 ******* 2026-02-15 03:50:49.165989 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:50:49.165996 | orchestrator | skipping: [testbed-node-4] 2026-02-15 
03:50:49.166004 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:50:49.166012 | orchestrator | 2026-02-15 03:50:49.166073 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-15 03:50:49.166081 | orchestrator | Sunday 15 February 2026 03:50:30 +0000 (0:00:00.356) 0:07:38.249 ******* 2026-02-15 03:50:49.166089 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:50:49.166097 | orchestrator | 2026-02-15 03:50:49.166105 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-15 03:50:49.166113 | orchestrator | Sunday 15 February 2026 03:50:31 +0000 (0:00:00.911) 0:07:39.160 ******* 2026-02-15 03:50:49.166121 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:50:49.166129 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:50:49.166138 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:50:49.166146 | orchestrator | 2026-02-15 03:50:49.166153 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2026-02-15 03:50:49.166161 | orchestrator | Sunday 15 February 2026 03:50:32 +0000 (0:00:00.709) 0:07:39.870 ******* 2026-02-15 03:50:49.166169 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:50:49.166177 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:50:49.166185 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:50:49.166193 | orchestrator | 2026-02-15 03:50:49.166201 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-15 03:50:49.166209 | orchestrator | Sunday 15 February 2026 03:50:35 +0000 (0:00:02.781) 0:07:42.651 ******* 2026-02-15 03:50:49.166217 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:50:49.166232 | orchestrator | 2026-02-15 03:50:49.166240 | orchestrator | TASK [ceph-osd : 
Generate systemd unit file] *********************************** 2026-02-15 03:50:49.166248 | orchestrator | Sunday 15 February 2026 03:50:36 +0000 (0:00:00.861) 0:07:43.513 ******* 2026-02-15 03:50:49.166256 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:50:49.166263 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:50:49.166271 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:50:49.166279 | orchestrator | 2026-02-15 03:50:49.166287 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-15 03:50:49.166295 | orchestrator | Sunday 15 February 2026 03:50:37 +0000 (0:00:01.297) 0:07:44.810 ******* 2026-02-15 03:50:49.166303 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:50:49.166311 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:50:49.166318 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:50:49.166326 | orchestrator | 2026-02-15 03:50:49.166334 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-15 03:50:49.166342 | orchestrator | Sunday 15 February 2026 03:50:38 +0000 (0:00:01.186) 0:07:45.996 ******* 2026-02-15 03:50:49.166350 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:50:49.166357 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:50:49.166365 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:50:49.166373 | orchestrator | 2026-02-15 03:50:49.166381 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-15 03:50:49.166389 | orchestrator | Sunday 15 February 2026 03:50:41 +0000 (0:00:02.426) 0:07:48.423 ******* 2026-02-15 03:50:49.166396 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:50:49.166409 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:50:49.166417 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:50:49.166425 | orchestrator | 2026-02-15 03:50:49.166433 | orchestrator | TASK [ceph-osd : Add ceph-osd 
systemd service overrides] *********************** 2026-02-15 03:50:49.166442 | orchestrator | Sunday 15 February 2026 03:50:41 +0000 (0:00:00.382) 0:07:48.806 ******* 2026-02-15 03:50:49.166456 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:50:49.166470 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:50:49.166484 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:50:49.166499 | orchestrator | 2026-02-15 03:50:49.166513 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-15 03:50:49.166528 | orchestrator | Sunday 15 February 2026 03:50:41 +0000 (0:00:00.358) 0:07:49.165 ******* 2026-02-15 03:50:49.166541 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-02-15 03:50:49.166555 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-15 03:50:49.166564 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-15 03:50:49.166572 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-15 03:50:49.166580 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-15 03:50:49.166587 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-02-15 03:50:49.166595 | orchestrator | 2026-02-15 03:50:49.166603 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-15 03:50:49.166611 | orchestrator | Sunday 15 February 2026 03:50:42 +0000 (0:00:01.121) 0:07:50.286 ******* 2026-02-15 03:50:49.166672 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-02-15 03:50:49.166680 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-15 03:50:49.166688 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-15 03:50:49.166696 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-15 03:50:49.166703 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-15 03:50:49.166711 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-15 03:50:49.166719 | orchestrator | 2026-02-15 03:50:49.166727 | orchestrator | TASK [ceph-osd : 
Systemd start osd] ******************************************** 2026-02-15 03:50:49.166735 | orchestrator | Sunday 15 February 2026 03:50:45 +0000 (0:00:02.635) 0:07:52.922 ******* 2026-02-15 03:50:49.166743 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-02-15 03:50:49.166759 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-15 03:51:22.489606 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-15 03:51:22.489812 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-15 03:51:22.489838 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-15 03:51:22.489859 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-15 03:51:22.489872 | orchestrator | 2026-02-15 03:51:22.489885 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-15 03:51:22.489898 | orchestrator | Sunday 15 February 2026 03:50:49 +0000 (0:00:03.650) 0:07:56.572 ******* 2026-02-15 03:51:22.489909 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:51:22.489920 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:51:22.489931 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-15 03:51:22.489942 | orchestrator | 2026-02-15 03:51:22.489954 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-15 03:51:22.489965 | orchestrator | Sunday 15 February 2026 03:50:52 +0000 (0:00:03.280) 0:07:59.852 ******* 2026-02-15 03:51:22.489975 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:51:22.489986 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:51:22.489997 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 
2026-02-15 03:51:22.490009 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-15 03:51:22.490088 | orchestrator | 2026-02-15 03:51:22.490102 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-15 03:51:22.490113 | orchestrator | Sunday 15 February 2026 03:51:04 +0000 (0:00:12.548) 0:08:12.401 ******* 2026-02-15 03:51:22.490897 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:51:22.490921 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:51:22.490934 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:51:22.490946 | orchestrator | 2026-02-15 03:51:22.490957 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-15 03:51:22.490969 | orchestrator | Sunday 15 February 2026 03:51:06 +0000 (0:00:01.303) 0:08:13.705 ******* 2026-02-15 03:51:22.490980 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:51:22.490991 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:51:22.491002 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:51:22.491013 | orchestrator | 2026-02-15 03:51:22.491025 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-15 03:51:22.491036 | orchestrator | Sunday 15 February 2026 03:51:06 +0000 (0:00:00.368) 0:08:14.074 ******* 2026-02-15 03:51:22.491048 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:51:22.491059 | orchestrator | 2026-02-15 03:51:22.491070 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-02-15 03:51:22.491082 | orchestrator | Sunday 15 February 2026 03:51:07 +0000 (0:00:00.916) 0:08:14.990 ******* 2026-02-15 03:51:22.491093 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 03:51:22.491105 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4) 
2026-02-15 03:51:22.491116 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-02-15 03:51:22.491127 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491138 | orchestrator |
2026-02-15 03:51:22.491149 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-02-15 03:51:22.491161 | orchestrator | Sunday 15 February 2026 03:51:08 +0000 (0:00:00.436) 0:08:15.426 *******
2026-02-15 03:51:22.491172 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491183 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:22.491194 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:22.491205 | orchestrator |
2026-02-15 03:51:22.491217 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-02-15 03:51:22.491228 | orchestrator | Sunday 15 February 2026 03:51:08 +0000 (0:00:00.368) 0:08:15.794 *******
2026-02-15 03:51:22.491256 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491293 | orchestrator |
2026-02-15 03:51:22.491305 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-02-15 03:51:22.491316 | orchestrator | Sunday 15 February 2026 03:51:08 +0000 (0:00:00.248) 0:08:16.043 *******
2026-02-15 03:51:22.491327 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491338 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:22.491349 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:22.491360 | orchestrator |
2026-02-15 03:51:22.491371 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-02-15 03:51:22.491383 | orchestrator | Sunday 15 February 2026 03:51:09 +0000 (0:00:00.255) 0:08:16.686 *******
2026-02-15 03:51:22.491394 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491404 | orchestrator |
2026-02-15 03:51:22.491415 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-02-15 03:51:22.491426 | orchestrator | Sunday 15 February 2026 03:51:09 +0000 (0:00:00.255) 0:08:16.942 *******
2026-02-15 03:51:22.491437 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491448 | orchestrator |
2026-02-15 03:51:22.491459 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-02-15 03:51:22.491470 | orchestrator | Sunday 15 February 2026 03:51:09 +0000 (0:00:00.260) 0:08:17.202 *******
2026-02-15 03:51:22.491481 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491492 | orchestrator |
2026-02-15 03:51:22.491503 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-02-15 03:51:22.491514 | orchestrator | Sunday 15 February 2026 03:51:09 +0000 (0:00:00.141) 0:08:17.344 *******
2026-02-15 03:51:22.491525 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491537 | orchestrator |
2026-02-15 03:51:22.491547 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-02-15 03:51:22.491558 | orchestrator | Sunday 15 February 2026 03:51:10 +0000 (0:00:00.267) 0:08:17.611 *******
2026-02-15 03:51:22.491569 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491580 | orchestrator |
2026-02-15 03:51:22.491591 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-02-15 03:51:22.491602 | orchestrator | Sunday 15 February 2026 03:51:10 +0000 (0:00:00.296) 0:08:17.908 *******
2026-02-15 03:51:22.491661 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3) 
2026-02-15 03:51:22.491674 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5) 
2026-02-15 03:51:22.491685 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4) 
2026-02-15 03:51:22.491695 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491706 | orchestrator |
2026-02-15 03:51:22.491717 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-02-15 03:51:22.491728 | orchestrator | Sunday 15 February 2026 03:51:10 +0000 (0:00:00.454) 0:08:18.363 *******
2026-02-15 03:51:22.491739 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491749 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:22.491760 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:22.491771 | orchestrator |
2026-02-15 03:51:22.491781 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-02-15 03:51:22.491792 | orchestrator | Sunday 15 February 2026 03:51:11 +0000 (0:00:00.359) 0:08:18.722 *******
2026-02-15 03:51:22.491803 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491813 | orchestrator |
2026-02-15 03:51:22.491824 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-02-15 03:51:22.491835 | orchestrator | Sunday 15 February 2026 03:51:11 +0000 (0:00:00.253) 0:08:18.975 *******
2026-02-15 03:51:22.491846 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.491857 | orchestrator |
2026-02-15 03:51:22.491867 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-02-15 03:51:22.491878 | orchestrator |
2026-02-15 03:51:22.491889 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 03:51:22.491899 | orchestrator | Sunday 15 February 2026 03:51:12 +0000 (0:00:01.398) 0:08:20.374 *******
2026-02-15 03:51:22.491953 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:51:22.491967 | orchestrator |
2026-02-15 03:51:22.491978 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 03:51:22.491989 | orchestrator | Sunday 15 February 2026 03:51:14 +0000 (0:00:01.630) 0:08:22.005 *******
2026-02-15 03:51:22.492000 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:51:22.492011 | orchestrator |
2026-02-15 03:51:22.492022 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 03:51:22.492033 | orchestrator | Sunday 15 February 2026 03:51:16 +0000 (0:00:01.528) 0:08:23.534 *******
2026-02-15 03:51:22.492043 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.492054 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:22.492066 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:22.492077 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:51:22.492088 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:51:22.492099 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:51:22.492110 | orchestrator |
2026-02-15 03:51:22.492121 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 03:51:22.492132 | orchestrator | Sunday 15 February 2026 03:51:17 +0000 (0:00:01.429) 0:08:24.963 *******
2026-02-15 03:51:22.492142 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:51:22.492153 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:51:22.492164 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:51:22.492175 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:51:22.492186 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:51:22.492197 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:51:22.492208 | orchestrator |
2026-02-15 03:51:22.492219 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 03:51:22.492230 | orchestrator | Sunday 15 February 2026 03:51:18 +0000 (0:00:00.794) 0:08:25.757 *******
2026-02-15 03:51:22.492248 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:51:22.492259 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:51:22.492270 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:51:22.492281 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:51:22.492292 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:51:22.492303 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:51:22.492314 | orchestrator |
2026-02-15 03:51:22.492325 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 03:51:22.492336 | orchestrator | Sunday 15 February 2026 03:51:19 +0000 (0:00:01.001) 0:08:26.759 *******
2026-02-15 03:51:22.492347 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:51:22.492358 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:51:22.492369 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:51:22.492381 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:51:22.492392 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:51:22.492403 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:51:22.492414 | orchestrator |
2026-02-15 03:51:22.492425 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 03:51:22.492436 | orchestrator | Sunday 15 February 2026 03:51:20 +0000 (0:00:00.789) 0:08:27.548 *******
2026-02-15 03:51:22.492447 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.492458 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:22.492470 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:22.492481 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:51:22.492492 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:51:22.492502 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:51:22.492513 | orchestrator |
2026-02-15 03:51:22.492525 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 03:51:22.492536 | orchestrator | Sunday 15 February 2026 03:51:21 +0000 (0:00:01.438) 0:08:28.987 *******
2026-02-15 03:51:22.492556 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:22.492567 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:22.492578 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:22.492589 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:51:22.492600 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:51:22.492611 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:51:22.492653 | orchestrator |
2026-02-15 03:51:22.492666 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 03:51:22.492677 | orchestrator | Sunday 15 February 2026 03:51:22 +0000 (0:00:00.710) 0:08:29.697 *******
2026-02-15 03:51:22.492696 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:56.213358 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:56.213510 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:56.213537 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:51:56.213557 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:51:56.213576 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:51:56.213590 | orchestrator |
2026-02-15 03:51:56.213603 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 03:51:56.213615 | orchestrator | Sunday 15 February 2026 03:51:23 +0000 (0:00:00.948) 0:08:30.646 *******
2026-02-15 03:51:56.213627 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:51:56.213699 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:51:56.213711 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:51:56.213722 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:51:56.213733 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:51:56.213745 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:51:56.213758 | orchestrator |
2026-02-15 03:51:56.213777 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 03:51:56.213806 | orchestrator | Sunday 15 February 2026 03:51:24 +0000 (0:00:01.093) 0:08:31.740 *******
2026-02-15 03:51:56.213826 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:51:56.213843 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:51:56.213861 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:51:56.213881 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:51:56.213901 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:51:56.213921 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:51:56.213937 | orchestrator |
2026-02-15 03:51:56.213951 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 03:51:56.213964 | orchestrator | Sunday 15 February 2026 03:51:25 +0000 (0:00:01.452) 0:08:33.192 *******
2026-02-15 03:51:56.213977 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:56.213989 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:56.214002 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:56.214079 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:51:56.214093 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:51:56.214106 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:51:56.214118 | orchestrator |
2026-02-15 03:51:56.214131 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 03:51:56.214143 | orchestrator | Sunday 15 February 2026 03:51:26 +0000 (0:00:00.664) 0:08:33.857 *******
2026-02-15 03:51:56.214156 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:56.214168 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:56.214181 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:56.214193 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:51:56.214206 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:51:56.214219 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:51:56.214231 | orchestrator |
2026-02-15 03:51:56.214242 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 03:51:56.214254 | orchestrator | Sunday 15 February 2026 03:51:27 +0000 (0:00:01.001) 0:08:34.858 *******
2026-02-15 03:51:56.214265 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:51:56.214275 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:51:56.214286 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:51:56.214297 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:51:56.214308 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:51:56.214347 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:51:56.214359 | orchestrator |
2026-02-15 03:51:56.214370 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 03:51:56.214381 | orchestrator | Sunday 15 February 2026 03:51:28 +0000 (0:00:00.692) 0:08:35.551 *******
2026-02-15 03:51:56.214392 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:51:56.214403 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:51:56.214413 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:51:56.214424 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:51:56.214435 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:51:56.214446 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:51:56.214457 | orchestrator |
2026-02-15 03:51:56.214467 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 03:51:56.214478 | orchestrator | Sunday 15 February 2026 03:51:29 +0000 (0:00:00.956) 0:08:36.508 *******
2026-02-15 03:51:56.214490 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:51:56.214501 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:51:56.214512 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:51:56.214523 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:51:56.214534 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:51:56.214544 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:51:56.214555 | orchestrator |
2026-02-15 03:51:56.214566 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 03:51:56.214581 | orchestrator | Sunday 15 February 2026 03:51:29 +0000 (0:00:00.703) 0:08:37.211 *******
2026-02-15 03:51:56.214599 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:56.214616 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:56.214657 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:56.214678 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:51:56.214698 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:51:56.214717 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:51:56.214736 | orchestrator |
2026-02-15 03:51:56.214755 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 03:51:56.214769 | orchestrator | Sunday 15 February 2026 03:51:30 +0000 (0:00:00.937) 0:08:38.149 *******
2026-02-15 03:51:56.214780 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:56.214790 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:56.214801 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:56.214812 | orchestrator | skipping: [testbed-node-0]
2026-02-15 03:51:56.214822 | orchestrator | skipping: [testbed-node-1]
2026-02-15 03:51:56.214833 | orchestrator | skipping: [testbed-node-2]
2026-02-15 03:51:56.214844 | orchestrator |
2026-02-15 03:51:56.214854 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 03:51:56.214866 | orchestrator | Sunday 15 February 2026 03:51:31 +0000 (0:00:00.656) 0:08:38.806 *******
2026-02-15 03:51:56.214876 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:51:56.214887 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:51:56.214898 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:51:56.214908 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:51:56.214919 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:51:56.214930 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:51:56.214940 | orchestrator |
2026-02-15 03:51:56.214951 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 03:51:56.214984 | orchestrator | Sunday 15 February 2026 03:51:32 +0000 (0:00:00.964) 0:08:39.770 *******
2026-02-15 03:51:56.214995 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:51:56.215006 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:51:56.215017 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:51:56.215027 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:51:56.215038 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:51:56.215048 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:51:56.215059 | orchestrator |
2026-02-15 03:51:56.215070 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 03:51:56.215081 | orchestrator | Sunday 15 February 2026 03:51:33 +0000 (0:00:00.710) 0:08:40.480 *******
2026-02-15 03:51:56.215214 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:51:56.215239 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:51:56.215250 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:51:56.215260 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:51:56.215271 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:51:56.215282 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:51:56.215292 | orchestrator |
2026-02-15 03:51:56.215304 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2026-02-15 03:51:56.215315 | orchestrator | Sunday 15 February 2026 03:51:34 +0000 (0:00:01.569) 0:08:42.049 *******
2026-02-15 03:51:56.215326 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-15 03:51:56.215337 | orchestrator |
2026-02-15 03:51:56.215348 | orchestrator | TASK [ceph-crash : Get keys from monitors] *************************************
2026-02-15 03:51:56.215358 | orchestrator | Sunday 15 February 2026 03:51:38 +0000 (0:00:04.021) 0:08:46.070 *******
2026-02-15 03:51:56.215369 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-15 03:51:56.215380 | orchestrator |
2026-02-15 03:51:56.215391 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
2026-02-15 03:51:56.215402 | orchestrator | Sunday 15 February 2026 03:51:40 +0000 (0:00:02.330) 0:08:48.401 *******
2026-02-15 03:51:56.215413 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:51:56.215424 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:51:56.215434 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:51:56.215445 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:51:56.215456 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:51:56.215467 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:51:56.215478 | orchestrator |
2026-02-15 03:51:56.215506 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
2026-02-15 03:51:56.215528 | orchestrator | Sunday 15 February 2026 03:51:42 +0000 (0:00:01.914) 0:08:50.316 *******
2026-02-15 03:51:56.215540 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:51:56.215550 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:51:56.215561 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:51:56.215572 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:51:56.215583 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:51:56.215593 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:51:56.215604 | orchestrator |
2026-02-15 03:51:56.215615 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] **********************************
2026-02-15 03:51:56.215626 | orchestrator | Sunday 15 February 2026 03:51:44 +0000 (0:00:01.335) 0:08:51.651 *******
2026-02-15 03:51:56.215681 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:51:56.215703 | orchestrator |
2026-02-15 03:51:56.215722 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ********
2026-02-15 03:51:56.215741 | orchestrator | Sunday 15 February 2026 03:51:45 +0000 (0:00:01.490) 0:08:53.142 *******
2026-02-15 03:51:56.215758 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:51:56.215773 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:51:56.215785 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:51:56.215802 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:51:56.215821 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:51:56.215839 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:51:56.215852 | orchestrator |
2026-02-15 03:51:56.215863 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] *******************************
2026-02-15 03:51:56.215881 | orchestrator | Sunday 15 February 2026 03:51:47 +0000 (0:00:01.652) 0:08:54.795 *******
2026-02-15 03:51:56.215900 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:51:56.215918 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:51:56.215938 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:51:56.215957 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:51:56.215977 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:51:56.215999 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:51:56.216010 | orchestrator |
2026-02-15 03:51:56.216021 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] ****************************
2026-02-15 03:51:56.216031 | orchestrator | Sunday 15 February 2026 03:51:51 +0000 (0:00:03.979) 0:08:58.774 *******
2026-02-15 03:51:56.216044 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:51:56.216063 | orchestrator |
2026-02-15 03:51:56.216080 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ******
2026-02-15 03:51:56.216098 | orchestrator | Sunday 15 February 2026 03:51:52 +0000 (0:00:01.408) 0:09:00.183 *******
2026-02-15 03:51:56.216116 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:51:56.216134 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:51:56.216152 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:51:56.216171 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:51:56.216189 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:51:56.216207 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:51:56.216226 | orchestrator |
2026-02-15 03:51:56.216245 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] ****************
2026-02-15 03:51:56.216260 | orchestrator | Sunday 15 February 2026 03:51:53 +0000 (0:00:00.735) 0:09:00.919 *******
2026-02-15 03:51:56.216271 | orchestrator | changed: [testbed-node-3]
2026-02-15 03:51:56.216281 | orchestrator | changed: [testbed-node-4]
2026-02-15 03:51:56.216292 | orchestrator | changed: [testbed-node-5]
2026-02-15 03:51:56.216303 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:51:56.216314 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:51:56.216324 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:51:56.216335 | orchestrator |
2026-02-15 03:51:56.216346 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] *******
2026-02-15 03:51:56.216371 | orchestrator | Sunday 15 February 2026 03:51:56 +0000 (0:00:02.689) 0:09:03.608 *******
2026-02-15 03:52:25.993756 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:52:25.993839 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:52:25.993845 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:52:25.993849 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:52:25.993854 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:52:25.993858 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:52:25.993863 | orchestrator |
2026-02-15 03:52:25.993868 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2026-02-15 03:52:25.993873 | orchestrator |
2026-02-15 03:52:25.993878 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 03:52:25.993882 | orchestrator | Sunday 15 February 2026 03:51:57 +0000 (0:00:01.029) 0:09:04.638 *******
2026-02-15 03:52:25.993886 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:52:25.993892 | orchestrator |
2026-02-15 03:52:25.993896 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 03:52:25.993899 | orchestrator | Sunday 15 February 2026 03:51:58 +0000 (0:00:00.950) 0:09:05.588 *******
2026-02-15 03:52:25.993903 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 03:52:25.993907 | orchestrator |
2026-02-15 03:52:25.993911 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 03:52:25.993915 | orchestrator | Sunday 15 February 2026 03:51:58 +0000 (0:00:00.608) 0:09:06.196 *******
2026-02-15 03:52:25.993919 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:52:25.993924 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:52:25.993928 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:52:25.993932 | orchestrator |
2026-02-15 03:52:25.993936 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 03:52:25.993940 | orchestrator | Sunday 15 February 2026 03:51:59 +0000 (0:00:00.687) 0:09:06.884 *******
2026-02-15 03:52:25.993943 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:52:25.993963 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:52:25.993967 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:52:25.993971 | orchestrator |
2026-02-15 03:52:25.993975 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 03:52:25.993979 | orchestrator | Sunday 15 February 2026 03:52:00 +0000 (0:00:00.761) 0:09:07.646 *******
2026-02-15 03:52:25.993983 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:52:25.993986 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:52:25.993990 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:52:25.993994 | orchestrator |
2026-02-15 03:52:25.993998 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 03:52:25.994002 | orchestrator | Sunday 15 February 2026 03:52:00 +0000 (0:00:00.741) 0:09:08.387 *******
2026-02-15 03:52:25.994005 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:52:25.994009 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:52:25.994046 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:52:25.994051 | orchestrator |
2026-02-15 03:52:25.994055 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 03:52:25.994058 | orchestrator | Sunday 15 February 2026 03:52:02 +0000 (0:00:01.067) 0:09:09.454 *******
2026-02-15 03:52:25.994062 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:52:25.994066 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:52:25.994070 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:52:25.994074 | orchestrator |
2026-02-15 03:52:25.994078 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 03:52:25.994081 | orchestrator | Sunday 15 February 2026 03:52:02 +0000 (0:00:00.374) 0:09:09.828 *******
2026-02-15 03:52:25.994085 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:52:25.994089 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:52:25.994093 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:52:25.994097 | orchestrator |
2026-02-15 03:52:25.994100 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 03:52:25.994113 | orchestrator | Sunday 15 February 2026 03:52:02 +0000 (0:00:00.352) 0:09:10.181 *******
2026-02-15 03:52:25.994117 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:52:25.994121 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:52:25.994125 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:52:25.994129 | orchestrator |
2026-02-15 03:52:25.994149 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 03:52:25.994153 | orchestrator | Sunday 15 February 2026 03:52:03 +0000 (0:00:00.354) 0:09:10.535 *******
2026-02-15 03:52:25.994157 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:52:25.994161 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:52:25.994165 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:52:25.994169 | orchestrator |
2026-02-15 03:52:25.994172 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 03:52:25.994176 | orchestrator | Sunday 15 February 2026 03:52:04 +0000 (0:00:01.230) 0:09:11.766 *******
2026-02-15 03:52:25.994180 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:52:25.994184 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:52:25.994188 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:52:25.994191 | orchestrator |
2026-02-15 03:52:25.994195 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 03:52:25.994199 | orchestrator | Sunday 15 February 2026 03:52:05 +0000 (0:00:00.844) 0:09:12.611 *******
2026-02-15 03:52:25.994203 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:52:25.994207 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:52:25.994210 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:52:25.994214 | orchestrator |
2026-02-15 03:52:25.994218 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 03:52:25.994222 | orchestrator | Sunday 15 February 2026 03:52:05 +0000 (0:00:00.336) 0:09:12.947 *******
2026-02-15 03:52:25.994226 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:52:25.994230 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:52:25.994233 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:52:25.994241 | orchestrator |
2026-02-15 03:52:25.994245 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 03:52:25.994249 | orchestrator | Sunday 15 February 2026 03:52:05 +0000 (0:00:00.370) 0:09:13.318 *******
2026-02-15 03:52:25.994253 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:52:25.994257 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:52:25.994260 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:52:25.994264 | orchestrator |
2026-02-15 03:52:25.994278 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 03:52:25.994283 | orchestrator | Sunday 15 February 2026 03:52:06 +0000 (0:00:00.671) 0:09:13.989 *******
2026-02-15 03:52:25.994287 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:52:25.994290 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:52:25.994294 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:52:25.994298 | orchestrator |
2026-02-15 03:52:25.994302 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 03:52:25.994306 | orchestrator | Sunday 15 February 2026 03:52:06 +0000 (0:00:00.395) 0:09:14.385 *******
2026-02-15 03:52:25.994309 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:52:25.994313 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:52:25.994317 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:52:25.994321 | orchestrator |
2026-02-15 03:52:25.994324 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 03:52:25.994329 | orchestrator | Sunday 15 February 2026 03:52:07 +0000 (0:00:00.415) 0:09:14.800 *******
2026-02-15 03:52:25.994333 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:52:25.994338 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:52:25.994342 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:52:25.994347 | orchestrator |
2026-02-15 03:52:25.994351 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 03:52:25.994355 | orchestrator | Sunday 15 February 2026 03:52:07 +0000 (0:00:00.356) 0:09:15.157 *******
2026-02-15 03:52:25.994360 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:52:25.994364 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:52:25.994369 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:52:25.994373 | orchestrator |
2026-02-15 03:52:25.994377 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 03:52:25.994381 | orchestrator | Sunday 15 February 2026 03:52:08 +0000 (0:00:00.639) 0:09:15.797 *******
2026-02-15 03:52:25.994386 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:52:25.994390 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:52:25.994394 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:52:25.994399 | orchestrator |
2026-02-15 03:52:25.994403 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 03:52:25.994407 | orchestrator | Sunday 15 February 2026 03:52:08 +0000 (0:00:00.350) 0:09:16.147 *******
2026-02-15 03:52:25.994411 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:52:25.994416 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:52:25.994420 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:52:25.994425 | orchestrator |
2026-02-15 03:52:25.994429 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 03:52:25.994433 | orchestrator | Sunday 15 February 2026 03:52:09 +0000 (0:00:00.374) 0:09:16.521 *******
2026-02-15 03:52:25.994438 | orchestrator | ok: [testbed-node-3]
2026-02-15 03:52:25.994442 | orchestrator | ok: [testbed-node-4]
2026-02-15 03:52:25.994446 | orchestrator | ok: [testbed-node-5]
2026-02-15 03:52:25.994450 | orchestrator |
2026-02-15 03:52:25.994455 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-15 03:52:25.994459 | orchestrator | Sunday 15 February 2026 03:52:10 +0000 (0:00:00.902) 0:09:17.424 *******
2026-02-15 03:52:25.994464 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:52:25.994468 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:52:25.994472 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-02-15 03:52:25.994477 | orchestrator |
2026-02-15 03:52:25.994485 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-02-15 03:52:25.994497 | orchestrator | Sunday 15 February 2026 03:52:10 +0000 (0:00:00.588) 0:09:18.013 *******
2026-02-15 03:52:25.994501 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-15 03:52:25.994506 | orchestrator |
2026-02-15 03:52:25.994510 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-02-15 03:52:25.994523 | orchestrator | Sunday 15 February 2026 03:52:12 +0000 (0:00:02.148) 0:09:20.161 *******
2026-02-15 03:52:25.994529 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]}) 
2026-02-15 03:52:25.994535 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:52:25.994539 | orchestrator |
2026-02-15 03:52:25.994543 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-02-15 03:52:25.994548 | orchestrator | Sunday 15 February 2026 03:52:13 +0000 (0:00:00.260) 0:09:20.421 *******
2026-02-15 03:52:25.994554 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-15 03:52:25.994563 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-02-15 03:52:25.994568 | orchestrator |
2026-02-15 03:52:25.994573 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-02-15 03:52:25.994577 | orchestrator | Sunday 15 February 2026 03:52:21 +0000 (0:00:08.407) 0:09:28.829 *******
2026-02-15 03:52:25.994582 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-15 03:52:25.994586 | orchestrator |
2026-02-15 03:52:25.994590 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-15 03:52:25.994595 | orchestrator | Sunday 15 February 2026 03:52:25 +0000 (0:00:03.673) 0:09:32.502 *******
2026-02-15 03:52:25.994599 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4,
testbed-node-5 2026-02-15 03:52:25.994604 | orchestrator | 2026-02-15 03:52:25.994610 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-15 03:52:54.562801 | orchestrator | Sunday 15 February 2026 03:52:25 +0000 (0:00:00.891) 0:09:33.394 ******* 2026-02-15 03:52:54.562945 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-15 03:52:54.562971 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-15 03:52:54.562989 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-15 03:52:54.563006 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-02-15 03:52:54.563023 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-02-15 03:52:54.563040 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-02-15 03:52:54.563058 | orchestrator | 2026-02-15 03:52:54.563075 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-15 03:52:54.563092 | orchestrator | Sunday 15 February 2026 03:52:27 +0000 (0:00:01.082) 0:09:34.476 ******* 2026-02-15 03:52:54.563107 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:52:54.563123 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-15 03:52:54.563141 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 03:52:54.563160 | orchestrator | 2026-02-15 03:52:54.563179 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-15 03:52:54.563231 | orchestrator | Sunday 15 February 2026 03:52:29 +0000 (0:00:02.104) 0:09:36.580 ******* 2026-02-15 03:52:54.563248 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-15 03:52:54.563265 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2026-02-15 03:52:54.563282 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:52:54.563300 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-15 03:52:54.563315 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-15 03:52:54.563331 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:52:54.563348 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-15 03:52:54.563363 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-15 03:52:54.563377 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:52:54.563392 | orchestrator | 2026-02-15 03:52:54.563407 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-15 03:52:54.563424 | orchestrator | Sunday 15 February 2026 03:52:30 +0000 (0:00:01.287) 0:09:37.868 ******* 2026-02-15 03:52:54.563440 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:52:54.563457 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:52:54.563471 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:52:54.563488 | orchestrator | 2026-02-15 03:52:54.563505 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-15 03:52:54.563522 | orchestrator | Sunday 15 February 2026 03:52:33 +0000 (0:00:03.177) 0:09:41.045 ******* 2026-02-15 03:52:54.563539 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:52:54.563553 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:52:54.563570 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:52:54.563586 | orchestrator | 2026-02-15 03:52:54.563602 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-15 03:52:54.563619 | orchestrator | Sunday 15 February 2026 03:52:33 +0000 (0:00:00.371) 0:09:41.417 ******* 2026-02-15 03:52:54.563636 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-02-15 03:52:54.563689 | orchestrator | 2026-02-15 03:52:54.563707 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-15 03:52:54.563744 | orchestrator | Sunday 15 February 2026 03:52:34 +0000 (0:00:00.915) 0:09:42.333 ******* 2026-02-15 03:52:54.563763 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:52:54.563782 | orchestrator | 2026-02-15 03:52:54.563798 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-15 03:52:54.563814 | orchestrator | Sunday 15 February 2026 03:52:35 +0000 (0:00:00.616) 0:09:42.949 ******* 2026-02-15 03:52:54.563831 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:52:54.563847 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:52:54.563864 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:52:54.563876 | orchestrator | 2026-02-15 03:52:54.563886 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-15 03:52:54.563896 | orchestrator | Sunday 15 February 2026 03:52:36 +0000 (0:00:01.368) 0:09:44.318 ******* 2026-02-15 03:52:54.563905 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:52:54.563915 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:52:54.563924 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:52:54.563934 | orchestrator | 2026-02-15 03:52:54.563944 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-15 03:52:54.563953 | orchestrator | Sunday 15 February 2026 03:52:38 +0000 (0:00:01.580) 0:09:45.898 ******* 2026-02-15 03:52:54.563963 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:52:54.563973 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:52:54.563982 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:52:54.563992 | orchestrator | 2026-02-15 
03:52:54.564001 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-15 03:52:54.564012 | orchestrator | Sunday 15 February 2026 03:52:40 +0000 (0:00:01.828) 0:09:47.726 ******* 2026-02-15 03:52:54.564035 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:52:54.564045 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:52:54.564054 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:52:54.564064 | orchestrator | 2026-02-15 03:52:54.564074 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-15 03:52:54.564083 | orchestrator | Sunday 15 February 2026 03:52:42 +0000 (0:00:02.002) 0:09:49.729 ******* 2026-02-15 03:52:54.564093 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:52:54.564103 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:52:54.564113 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:52:54.564122 | orchestrator | 2026-02-15 03:52:54.564132 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-15 03:52:54.564165 | orchestrator | Sunday 15 February 2026 03:52:43 +0000 (0:00:01.579) 0:09:51.309 ******* 2026-02-15 03:52:54.564176 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:52:54.564186 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:52:54.564195 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:52:54.564205 | orchestrator | 2026-02-15 03:52:54.564214 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-15 03:52:54.564224 | orchestrator | Sunday 15 February 2026 03:52:44 +0000 (0:00:00.797) 0:09:52.106 ******* 2026-02-15 03:52:54.564236 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:52:54.564254 | orchestrator | 2026-02-15 03:52:54.564270 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-02-15 03:52:54.564286 | orchestrator | Sunday 15 February 2026 03:52:45 +0000 (0:00:00.865) 0:09:52.971 ******* 2026-02-15 03:52:54.564302 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:52:54.564317 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:52:54.564330 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:52:54.564346 | orchestrator | 2026-02-15 03:52:54.564362 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-02-15 03:52:54.564380 | orchestrator | Sunday 15 February 2026 03:52:45 +0000 (0:00:00.361) 0:09:53.332 ******* 2026-02-15 03:52:54.564396 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:52:54.564413 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:52:54.564429 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:52:54.564440 | orchestrator | 2026-02-15 03:52:54.564450 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-02-15 03:52:54.564459 | orchestrator | Sunday 15 February 2026 03:52:47 +0000 (0:00:01.319) 0:09:54.652 ******* 2026-02-15 03:52:54.564469 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 03:52:54.564479 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 03:52:54.564489 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-15 03:52:54.564499 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:52:54.564508 | orchestrator | 2026-02-15 03:52:54.564518 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-02-15 03:52:54.564529 | orchestrator | Sunday 15 February 2026 03:52:48 +0000 (0:00:01.039) 0:09:55.692 ******* 2026-02-15 03:52:54.564546 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:52:54.564563 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:52:54.564579 | orchestrator | ok: [testbed-node-5] 2026-02-15 
03:52:54.564595 | orchestrator | 2026-02-15 03:52:54.564609 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-02-15 03:52:54.564624 | orchestrator | 2026-02-15 03:52:54.564676 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-15 03:52:54.564694 | orchestrator | Sunday 15 February 2026 03:52:49 +0000 (0:00:00.941) 0:09:56.633 ******* 2026-02-15 03:52:54.564712 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:52:54.564730 | orchestrator | 2026-02-15 03:52:54.564747 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-15 03:52:54.564763 | orchestrator | Sunday 15 February 2026 03:52:49 +0000 (0:00:00.585) 0:09:57.219 ******* 2026-02-15 03:52:54.564792 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:52:54.564807 | orchestrator | 2026-02-15 03:52:54.564823 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-15 03:52:54.564837 | orchestrator | Sunday 15 February 2026 03:52:50 +0000 (0:00:00.829) 0:09:58.049 ******* 2026-02-15 03:52:54.564863 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:52:54.564880 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:52:54.564896 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:52:54.564913 | orchestrator | 2026-02-15 03:52:54.564929 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-15 03:52:54.564946 | orchestrator | Sunday 15 February 2026 03:52:51 +0000 (0:00:00.421) 0:09:58.470 ******* 2026-02-15 03:52:54.564963 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:52:54.564979 | orchestrator | ok: [testbed-node-4] 2026-02-15 
03:52:54.564995 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:52:54.565012 | orchestrator | 2026-02-15 03:52:54.565028 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-15 03:52:54.565044 | orchestrator | Sunday 15 February 2026 03:52:51 +0000 (0:00:00.786) 0:09:59.257 ******* 2026-02-15 03:52:54.565062 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:52:54.565078 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:52:54.565096 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:52:54.565112 | orchestrator | 2026-02-15 03:52:54.565129 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-15 03:52:54.565146 | orchestrator | Sunday 15 February 2026 03:52:52 +0000 (0:00:01.000) 0:10:00.258 ******* 2026-02-15 03:52:54.565161 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:52:54.565178 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:52:54.565195 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:52:54.565211 | orchestrator | 2026-02-15 03:52:54.565226 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-15 03:52:54.565244 | orchestrator | Sunday 15 February 2026 03:52:53 +0000 (0:00:00.742) 0:10:01.000 ******* 2026-02-15 03:52:54.565260 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:52:54.565278 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:52:54.565294 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:52:54.565309 | orchestrator | 2026-02-15 03:52:54.565326 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-15 03:52:54.565341 | orchestrator | Sunday 15 February 2026 03:52:53 +0000 (0:00:00.356) 0:10:01.357 ******* 2026-02-15 03:52:54.565356 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:52:54.565374 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:52:54.565391 | orchestrator | skipping: 
[testbed-node-5] 2026-02-15 03:52:54.565406 | orchestrator | 2026-02-15 03:52:54.565421 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-15 03:52:54.565436 | orchestrator | Sunday 15 February 2026 03:52:54 +0000 (0:00:00.362) 0:10:01.719 ******* 2026-02-15 03:52:54.565473 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:53:17.932251 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:53:17.932370 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:53:17.932402 | orchestrator | 2026-02-15 03:53:17.932423 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-15 03:53:17.932442 | orchestrator | Sunday 15 February 2026 03:52:55 +0000 (0:00:00.699) 0:10:02.419 ******* 2026-02-15 03:53:17.932458 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:53:17.932475 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:53:17.932491 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:53:17.932507 | orchestrator | 2026-02-15 03:53:17.932523 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-15 03:53:17.932540 | orchestrator | Sunday 15 February 2026 03:52:55 +0000 (0:00:00.817) 0:10:03.236 ******* 2026-02-15 03:53:17.932557 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:53:17.932604 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:53:17.932624 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:53:17.932641 | orchestrator | 2026-02-15 03:53:17.932682 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-15 03:53:17.932692 | orchestrator | Sunday 15 February 2026 03:52:56 +0000 (0:00:00.764) 0:10:04.001 ******* 2026-02-15 03:53:17.932702 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:53:17.932712 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:53:17.932722 | orchestrator | skipping: [testbed-node-5] 2026-02-15 
03:53:17.932732 | orchestrator | 2026-02-15 03:53:17.932743 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-15 03:53:17.932753 | orchestrator | Sunday 15 February 2026 03:52:56 +0000 (0:00:00.357) 0:10:04.359 ******* 2026-02-15 03:53:17.932763 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:53:17.932773 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:53:17.932783 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:53:17.932792 | orchestrator | 2026-02-15 03:53:17.932803 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-15 03:53:17.932815 | orchestrator | Sunday 15 February 2026 03:52:57 +0000 (0:00:00.644) 0:10:05.003 ******* 2026-02-15 03:53:17.932826 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:53:17.932837 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:53:17.932848 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:53:17.932859 | orchestrator | 2026-02-15 03:53:17.932870 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-15 03:53:17.932881 | orchestrator | Sunday 15 February 2026 03:52:58 +0000 (0:00:00.421) 0:10:05.425 ******* 2026-02-15 03:53:17.932892 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:53:17.932903 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:53:17.932914 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:53:17.932925 | orchestrator | 2026-02-15 03:53:17.932936 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-15 03:53:17.932948 | orchestrator | Sunday 15 February 2026 03:52:58 +0000 (0:00:00.403) 0:10:05.828 ******* 2026-02-15 03:53:17.932958 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:53:17.932969 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:53:17.932981 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:53:17.932992 | orchestrator | 2026-02-15 
03:53:17.933003 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-15 03:53:17.933014 | orchestrator | Sunday 15 February 2026 03:52:58 +0000 (0:00:00.360) 0:10:06.188 ******* 2026-02-15 03:53:17.933025 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:53:17.933036 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:53:17.933047 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:53:17.933059 | orchestrator | 2026-02-15 03:53:17.933069 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-15 03:53:17.933080 | orchestrator | Sunday 15 February 2026 03:52:59 +0000 (0:00:00.669) 0:10:06.858 ******* 2026-02-15 03:53:17.933091 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:53:17.933102 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:53:17.933128 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:53:17.933140 | orchestrator | 2026-02-15 03:53:17.933152 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-15 03:53:17.933163 | orchestrator | Sunday 15 February 2026 03:52:59 +0000 (0:00:00.386) 0:10:07.244 ******* 2026-02-15 03:53:17.933174 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:53:17.933185 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:53:17.933195 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:53:17.933204 | orchestrator | 2026-02-15 03:53:17.933214 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-15 03:53:17.933224 | orchestrator | Sunday 15 February 2026 03:53:00 +0000 (0:00:00.357) 0:10:07.602 ******* 2026-02-15 03:53:17.933234 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:53:17.933243 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:53:17.933253 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:53:17.933270 | orchestrator | 2026-02-15 03:53:17.933281 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-15 03:53:17.933291 | orchestrator | Sunday 15 February 2026 03:53:00 +0000 (0:00:00.423) 0:10:08.025 ******* 2026-02-15 03:53:17.933300 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:53:17.933310 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:53:17.933320 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:53:17.933329 | orchestrator | 2026-02-15 03:53:17.933339 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-15 03:53:17.933349 | orchestrator | Sunday 15 February 2026 03:53:01 +0000 (0:00:00.946) 0:10:08.972 ******* 2026-02-15 03:53:17.933360 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:53:17.933370 | orchestrator | 2026-02-15 03:53:17.933380 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-15 03:53:17.933390 | orchestrator | Sunday 15 February 2026 03:53:02 +0000 (0:00:00.622) 0:10:09.594 ******* 2026-02-15 03:53:17.933400 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:53:17.933410 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-15 03:53:17.933419 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 03:53:17.933429 | orchestrator | 2026-02-15 03:53:17.933439 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-15 03:53:17.933449 | orchestrator | Sunday 15 February 2026 03:53:04 +0000 (0:00:02.507) 0:10:12.102 ******* 2026-02-15 03:53:17.933478 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-15 03:53:17.933489 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-15 03:53:17.933499 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:53:17.933508 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-02-15 03:53:17.933518 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-15 03:53:17.933528 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:53:17.933537 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-15 03:53:17.933547 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-15 03:53:17.933557 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:53:17.933567 | orchestrator | 2026-02-15 03:53:17.933577 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-02-15 03:53:17.933587 | orchestrator | Sunday 15 February 2026 03:53:06 +0000 (0:00:01.671) 0:10:13.773 ******* 2026-02-15 03:53:17.933596 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:53:17.933606 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:53:17.933616 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:53:17.933626 | orchestrator | 2026-02-15 03:53:17.933636 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-15 03:53:17.933696 | orchestrator | Sunday 15 February 2026 03:53:06 +0000 (0:00:00.376) 0:10:14.150 ******* 2026-02-15 03:53:17.933714 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:53:17.933730 | orchestrator | 2026-02-15 03:53:17.933746 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-15 03:53:17.933756 | orchestrator | Sunday 15 February 2026 03:53:07 +0000 (0:00:00.581) 0:10:14.732 ******* 2026-02-15 03:53:17.933767 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-15 03:53:17.933778 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 03:53:17.933788 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 03:53:17.933797 | orchestrator | 2026-02-15 03:53:17.933807 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-15 03:53:17.933825 | orchestrator | Sunday 15 February 2026 03:53:08 +0000 (0:00:01.140) 0:10:15.872 ******* 2026-02-15 03:53:17.933835 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:53:17.933844 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-15 03:53:17.933854 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:53:17.933864 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:53:17.933874 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-15 03:53:17.933889 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-15 03:53:17.933899 | orchestrator | 2026-02-15 03:53:17.933909 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-15 03:53:17.933919 | orchestrator | Sunday 15 February 2026 03:53:12 +0000 (0:00:04.528) 0:10:20.401 ******* 2026-02-15 03:53:17.933928 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:53:17.933938 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 03:53:17.933948 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:53:17.933957 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 03:53:17.933967 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:53:17.933976 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 03:53:17.933986 | orchestrator | 2026-02-15 03:53:17.933996 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-15 03:53:17.934006 | orchestrator | Sunday 15 February 2026 03:53:15 +0000 (0:00:02.323) 0:10:22.724 ******* 2026-02-15 03:53:17.934069 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-02-15 03:53:17.934082 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:53:17.934092 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-02-15 03:53:17.934102 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:53:17.934111 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-02-15 03:53:17.934121 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:53:17.934131 | orchestrator | 2026-02-15 03:53:17.934141 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-15 03:53:17.934151 | orchestrator | Sunday 15 February 2026 03:53:16 +0000 (0:00:01.674) 0:10:24.399 ******* 2026-02-15 03:53:17.934161 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-15 03:53:17.934170 | orchestrator | 2026-02-15 03:53:17.934180 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-15 03:53:17.934190 | orchestrator | Sunday 15 February 2026 03:53:17 +0000 (0:00:00.274) 0:10:24.674 ******* 2026-02-15 03:53:17.934200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-02-15 03:53:17.934219 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 03:54:02.951521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 03:54:02.951639 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 03:54:02.951711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 03:54:02.951721 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:02.951754 | orchestrator | 2026-02-15 03:54:02.951764 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-15 03:54:02.951774 | orchestrator | Sunday 15 February 2026 03:53:17 +0000 (0:00:00.661) 0:10:25.336 ******* 2026-02-15 03:54:02.951783 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 03:54:02.951791 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 03:54:02.951800 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 03:54:02.951808 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 03:54:02.951816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 03:54:02.951826 | orchestrator | skipping: [testbed-node-3] 2026-02-15 
03:54:02.951840 | orchestrator | 2026-02-15 03:54:02.951854 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-15 03:54:02.951867 | orchestrator | Sunday 15 February 2026 03:53:18 +0000 (0:00:00.655) 0:10:25.991 ******* 2026-02-15 03:54:02.951881 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 03:54:02.951896 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 03:54:02.951911 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 03:54:02.951925 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 03:54:02.951935 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 03:54:02.951943 | orchestrator | 2026-02-15 03:54:02.951965 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-15 03:54:02.951973 | orchestrator | Sunday 15 February 2026 03:53:49 +0000 (0:00:30.974) 0:10:56.966 ******* 2026-02-15 03:54:02.951981 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:02.951989 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:02.951997 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:02.952005 | orchestrator | 2026-02-15 03:54:02.952013 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-15 03:54:02.952021 | orchestrator | 
Sunday 15 February 2026 03:53:49 +0000 (0:00:00.357) 0:10:57.324 ******* 2026-02-15 03:54:02.952029 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:02.952037 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:02.952045 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:02.952052 | orchestrator | 2026-02-15 03:54:02.952061 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-15 03:54:02.952071 | orchestrator | Sunday 15 February 2026 03:53:50 +0000 (0:00:00.382) 0:10:57.706 ******* 2026-02-15 03:54:02.952081 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:54:02.952090 | orchestrator | 2026-02-15 03:54:02.952099 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-02-15 03:54:02.952108 | orchestrator | Sunday 15 February 2026 03:53:51 +0000 (0:00:01.011) 0:10:58.717 ******* 2026-02-15 03:54:02.952118 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:54:02.952135 | orchestrator | 2026-02-15 03:54:02.952145 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-15 03:54:02.952154 | orchestrator | Sunday 15 February 2026 03:53:51 +0000 (0:00:00.573) 0:10:59.291 ******* 2026-02-15 03:54:02.952163 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:54:02.952172 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:54:02.952181 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:54:02.952190 | orchestrator | 2026-02-15 03:54:02.952199 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-15 03:54:02.952209 | orchestrator | Sunday 15 February 2026 03:53:53 +0000 (0:00:01.718) 0:11:01.010 ******* 2026-02-15 03:54:02.952218 | orchestrator | changed: 
[testbed-node-3] 2026-02-15 03:54:02.952227 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:54:02.952237 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:54:02.952246 | orchestrator | 2026-02-15 03:54:02.952261 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-15 03:54:02.952294 | orchestrator | Sunday 15 February 2026 03:53:54 +0000 (0:00:01.208) 0:11:02.219 ******* 2026-02-15 03:54:02.952309 | orchestrator | changed: [testbed-node-3] 2026-02-15 03:54:02.952324 | orchestrator | changed: [testbed-node-4] 2026-02-15 03:54:02.952338 | orchestrator | changed: [testbed-node-5] 2026-02-15 03:54:02.952353 | orchestrator | 2026-02-15 03:54:02.952365 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-15 03:54:02.952373 | orchestrator | Sunday 15 February 2026 03:53:56 +0000 (0:00:01.742) 0:11:03.961 ******* 2026-02-15 03:54:02.952381 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-15 03:54:02.952389 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 03:54:02.952397 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 03:54:02.952405 | orchestrator | 2026-02-15 03:54:02.952413 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-15 03:54:02.952421 | orchestrator | Sunday 15 February 2026 03:53:59 +0000 (0:00:02.866) 0:11:06.827 ******* 2026-02-15 03:54:02.952429 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:02.952437 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:02.952444 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:02.952452 | orchestrator 
| 2026-02-15 03:54:02.952460 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-15 03:54:02.952468 | orchestrator | Sunday 15 February 2026 03:53:59 +0000 (0:00:00.362) 0:11:07.190 ******* 2026-02-15 03:54:02.952475 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:54:02.952483 | orchestrator | 2026-02-15 03:54:02.952491 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-02-15 03:54:02.952499 | orchestrator | Sunday 15 February 2026 03:54:00 +0000 (0:00:00.883) 0:11:08.073 ******* 2026-02-15 03:54:02.952507 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:02.952516 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:02.952523 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:02.952531 | orchestrator | 2026-02-15 03:54:02.952539 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-02-15 03:54:02.952547 | orchestrator | Sunday 15 February 2026 03:54:01 +0000 (0:00:00.411) 0:11:08.484 ******* 2026-02-15 03:54:02.952555 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:02.952562 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:02.952570 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:02.952578 | orchestrator | 2026-02-15 03:54:02.952586 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-02-15 03:54:02.952594 | orchestrator | Sunday 15 February 2026 03:54:01 +0000 (0:00:00.353) 0:11:08.838 ******* 2026-02-15 03:54:02.952609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 03:54:02.952617 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 03:54:02.952625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-15 03:54:02.952633 | orchestrator 
| skipping: [testbed-node-3] 2026-02-15 03:54:02.952641 | orchestrator | 2026-02-15 03:54:02.952649 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-02-15 03:54:02.952690 | orchestrator | Sunday 15 February 2026 03:54:02 +0000 (0:00:00.966) 0:11:09.805 ******* 2026-02-15 03:54:02.952699 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:02.952707 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:02.952715 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:02.952722 | orchestrator | 2026-02-15 03:54:02.952730 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:54:02.952738 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-02-15 03:54:02.952747 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-02-15 03:54:02.952755 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-02-15 03:54:02.952763 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-02-15 03:54:02.952771 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-02-15 03:54:02.952779 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-02-15 03:54:02.952787 | orchestrator | 2026-02-15 03:54:02.952795 | orchestrator | 2026-02-15 03:54:02.952803 | orchestrator | 2026-02-15 03:54:02.952811 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:54:02.952819 | orchestrator | Sunday 15 February 2026 03:54:02 +0000 (0:00:00.543) 0:11:10.348 ******* 2026-02-15 03:54:02.952827 | orchestrator | =============================================================================== 
2026-02-15 03:54:02.952835 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 54.56s 2026-02-15 03:54:02.952842 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.45s 2026-02-15 03:54:02.952850 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.97s 2026-02-15 03:54:02.952865 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.31s 2026-02-15 03:54:03.463341 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.86s 2026-02-15 03:54:03.463426 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.41s 2026-02-15 03:54:03.463433 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.55s 2026-02-15 03:54:03.463437 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.44s 2026-02-15 03:54:03.463442 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 8.81s 2026-02-15 03:54:03.463446 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.41s 2026-02-15 03:54:03.463450 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.50s 2026-02-15 03:54:03.463455 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 5.97s 2026-02-15 03:54:03.463459 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.19s 2026-02-15 03:54:03.463463 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.53s 2026-02-15 03:54:03.463466 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.02s 2026-02-15 03:54:03.463489 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.98s 2026-02-15 
03:54:03.463493 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.67s 2026-02-15 03:54:03.463497 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.65s 2026-02-15 03:54:03.463501 | orchestrator | ceph-container-common : Get ceph version -------------------------------- 3.50s 2026-02-15 03:54:03.463505 | orchestrator | ceph-osd : Unset noup flag ---------------------------------------------- 3.28s 2026-02-15 03:54:06.036311 | orchestrator | 2026-02-15 03:54:06 | INFO  | Task 257f6ac2-a7f4-4855-982e-ac940a1b3ebe (ceph-pools) was prepared for execution. 2026-02-15 03:54:06.036398 | orchestrator | 2026-02-15 03:54:06 | INFO  | It takes a moment until task 257f6ac2-a7f4-4855-982e-ac940a1b3ebe (ceph-pools) has been started and output is visible here. 2026-02-15 03:54:21.206572 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-15 03:54:21.206758 | orchestrator | 2.16.14 2026-02-15 03:54:21.206777 | orchestrator | 2026-02-15 03:54:21.206789 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-02-15 03:54:21.206800 | orchestrator | 2026-02-15 03:54:21.206811 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 03:54:21.206822 | orchestrator | Sunday 15 February 2026 03:54:10 +0000 (0:00:00.650) 0:00:00.650 ******* 2026-02-15 03:54:21.206832 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:54:21.206843 | orchestrator | 2026-02-15 03:54:21.206853 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 03:54:21.206863 | orchestrator | Sunday 15 February 2026 03:54:11 +0000 (0:00:00.721) 0:00:01.371 ******* 2026-02-15 03:54:21.206873 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:21.206884 | 
orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:21.206929 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:21.206941 | orchestrator | 2026-02-15 03:54:21.206951 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 03:54:21.206961 | orchestrator | Sunday 15 February 2026 03:54:12 +0000 (0:00:00.706) 0:00:02.078 ******* 2026-02-15 03:54:21.206970 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:21.206980 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:21.206990 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:21.207000 | orchestrator | 2026-02-15 03:54:21.207077 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 03:54:21.207089 | orchestrator | Sunday 15 February 2026 03:54:12 +0000 (0:00:00.338) 0:00:02.416 ******* 2026-02-15 03:54:21.207099 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:21.207109 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:21.207121 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:21.207132 | orchestrator | 2026-02-15 03:54:21.207143 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 03:54:21.207154 | orchestrator | Sunday 15 February 2026 03:54:13 +0000 (0:00:00.882) 0:00:03.299 ******* 2026-02-15 03:54:21.207166 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:21.207177 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:21.207188 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:21.207198 | orchestrator | 2026-02-15 03:54:21.207210 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 03:54:21.207221 | orchestrator | Sunday 15 February 2026 03:54:13 +0000 (0:00:00.338) 0:00:03.637 ******* 2026-02-15 03:54:21.207232 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:21.207242 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:21.207254 | 
orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:21.207264 | orchestrator | 2026-02-15 03:54:21.207275 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 03:54:21.207286 | orchestrator | Sunday 15 February 2026 03:54:14 +0000 (0:00:00.327) 0:00:03.965 ******* 2026-02-15 03:54:21.207298 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:21.207331 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:21.207343 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:21.207354 | orchestrator | 2026-02-15 03:54:21.207366 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 03:54:21.207377 | orchestrator | Sunday 15 February 2026 03:54:14 +0000 (0:00:00.334) 0:00:04.299 ******* 2026-02-15 03:54:21.207389 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:21.207401 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:21.207442 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:21.207453 | orchestrator | 2026-02-15 03:54:21.207465 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 03:54:21.207476 | orchestrator | Sunday 15 February 2026 03:54:15 +0000 (0:00:00.582) 0:00:04.882 ******* 2026-02-15 03:54:21.207486 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:21.207495 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:21.207505 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:21.207515 | orchestrator | 2026-02-15 03:54:21.207524 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 03:54:21.207534 | orchestrator | Sunday 15 February 2026 03:54:15 +0000 (0:00:00.330) 0:00:05.212 ******* 2026-02-15 03:54:21.207544 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 03:54:21.207553 | orchestrator | ok: [testbed-node-3 -> 
testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 03:54:21.207563 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 03:54:21.207573 | orchestrator | 2026-02-15 03:54:21.207583 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 03:54:21.207592 | orchestrator | Sunday 15 February 2026 03:54:16 +0000 (0:00:00.709) 0:00:05.922 ******* 2026-02-15 03:54:21.207602 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:21.207612 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:21.207621 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:21.207631 | orchestrator | 2026-02-15 03:54:21.207640 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 03:54:21.207650 | orchestrator | Sunday 15 February 2026 03:54:16 +0000 (0:00:00.511) 0:00:06.433 ******* 2026-02-15 03:54:21.207690 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 03:54:21.207708 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 03:54:21.207726 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 03:54:21.207743 | orchestrator | 2026-02-15 03:54:21.207760 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 03:54:21.207777 | orchestrator | Sunday 15 February 2026 03:54:18 +0000 (0:00:02.263) 0:00:08.697 ******* 2026-02-15 03:54:21.207789 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-15 03:54:21.207799 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-15 03:54:21.207809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-15 03:54:21.207819 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:21.207828 | 
orchestrator | 2026-02-15 03:54:21.207856 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 03:54:21.207867 | orchestrator | Sunday 15 February 2026 03:54:19 +0000 (0:00:00.695) 0:00:09.393 ******* 2026-02-15 03:54:21.207881 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 03:54:21.207900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 03:54:21.207926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 03:54:21.207955 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:21.207973 | orchestrator | 2026-02-15 03:54:21.207989 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 03:54:21.208002 | orchestrator | Sunday 15 February 2026 03:54:20 +0000 (0:00:01.134) 0:00:10.527 ******* 2026-02-15 03:54:21.208014 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:21.208027 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 
'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:21.208037 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:21.208047 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:21.208057 | orchestrator | 2026-02-15 03:54:21.208067 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 03:54:21.208077 | orchestrator | Sunday 15 February 2026 03:54:20 +0000 (0:00:00.195) 0:00:10.723 ******* 2026-02-15 03:54:21.208088 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e40f30e87190', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 03:54:17.618368', 'end': '2026-02-15 03:54:17.662842', 'delta': '0:00:00.044474', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e40f30e87190'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 03:54:21.208102 | 
orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3aeb4857506c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 03:54:18.192519', 'end': '2026-02-15 03:54:18.244277', 'delta': '0:00:00.051758', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3aeb4857506c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 03:54:21.208126 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '9cffadff9441', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 03:54:18.781507', 'end': '2026-02-15 03:54:18.828792', 'delta': '0:00:00.047285', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9cffadff9441'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 03:54:28.619473 | orchestrator | 2026-02-15 03:54:28.619573 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 03:54:28.619586 | orchestrator | Sunday 15 February 2026 03:54:21 +0000 (0:00:00.193) 0:00:10.917 ******* 2026-02-15 03:54:28.619594 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:28.619602 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:28.619609 | 
orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:28.619616 | orchestrator | 2026-02-15 03:54:28.619623 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 03:54:28.619630 | orchestrator | Sunday 15 February 2026 03:54:21 +0000 (0:00:00.487) 0:00:11.404 ******* 2026-02-15 03:54:28.619641 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-02-15 03:54:28.619653 | orchestrator | 2026-02-15 03:54:28.619712 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-15 03:54:28.619728 | orchestrator | Sunday 15 February 2026 03:54:23 +0000 (0:00:01.770) 0:00:13.175 ******* 2026-02-15 03:54:28.619739 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:28.619749 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:28.619759 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:28.619770 | orchestrator | 2026-02-15 03:54:28.619780 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-15 03:54:28.619790 | orchestrator | Sunday 15 February 2026 03:54:23 +0000 (0:00:00.323) 0:00:13.499 ******* 2026-02-15 03:54:28.619799 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:28.619808 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:28.619820 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:28.619830 | orchestrator | 2026-02-15 03:54:28.619841 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 03:54:28.619853 | orchestrator | Sunday 15 February 2026 03:54:24 +0000 (0:00:00.925) 0:00:14.425 ******* 2026-02-15 03:54:28.619865 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:28.619876 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:28.619887 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:28.619899 | orchestrator | 2026-02-15 03:54:28.619910 | 
orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-15 03:54:28.619922 | orchestrator | Sunday 15 February 2026 03:54:25 +0000 (0:00:00.313) 0:00:14.739 ******* 2026-02-15 03:54:28.619933 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:28.619944 | orchestrator | 2026-02-15 03:54:28.619956 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-15 03:54:28.619965 | orchestrator | Sunday 15 February 2026 03:54:25 +0000 (0:00:00.141) 0:00:14.880 ******* 2026-02-15 03:54:28.619976 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:28.619987 | orchestrator | 2026-02-15 03:54:28.619999 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 03:54:28.620010 | orchestrator | Sunday 15 February 2026 03:54:25 +0000 (0:00:00.246) 0:00:15.126 ******* 2026-02-15 03:54:28.620022 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:28.620034 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:28.620047 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:28.620058 | orchestrator | 2026-02-15 03:54:28.620067 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-15 03:54:28.620076 | orchestrator | Sunday 15 February 2026 03:54:25 +0000 (0:00:00.329) 0:00:15.456 ******* 2026-02-15 03:54:28.620084 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:28.620092 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:28.620119 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:28.620127 | orchestrator | 2026-02-15 03:54:28.620135 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-15 03:54:28.620143 | orchestrator | Sunday 15 February 2026 03:54:26 +0000 (0:00:00.352) 0:00:15.808 ******* 2026-02-15 03:54:28.620151 | orchestrator | skipping: [testbed-node-3] 
2026-02-15 03:54:28.620159 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:54:28.620167 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:54:28.620175 | orchestrator |
2026-02-15 03:54:28.620182 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-15 03:54:28.620190 | orchestrator | Sunday 15 February 2026 03:54:26 +0000 (0:00:00.599) 0:00:16.407 *******
2026-02-15 03:54:28.620198 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:54:28.620206 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:54:28.620214 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:54:28.620221 | orchestrator |
2026-02-15 03:54:28.620229 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-15 03:54:28.620237 | orchestrator | Sunday 15 February 2026 03:54:27 +0000 (0:00:00.399) 0:00:16.806 *******
2026-02-15 03:54:28.620245 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:54:28.620253 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:54:28.620261 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:54:28.620268 | orchestrator |
2026-02-15 03:54:28.620276 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-15 03:54:28.620284 | orchestrator | Sunday 15 February 2026 03:54:27 +0000 (0:00:00.339) 0:00:17.146 *******
2026-02-15 03:54:28.620292 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:54:28.620299 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:54:28.620307 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:54:28.620314 | orchestrator |
2026-02-15 03:54:28.620322 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-15 03:54:28.620331 | orchestrator | Sunday 15 February 2026 03:54:27 +0000 (0:00:00.566) 0:00:17.713 *******
2026-02-15 03:54:28.620339 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:54:28.620347 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:54:28.620355 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:54:28.620363 | orchestrator |
2026-02-15 03:54:28.620370 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-15 03:54:28.620378 | orchestrator | Sunday 15 February 2026 03:54:28 +0000 (0:00:00.375) 0:00:18.088 *******
2026-02-15 03:54:28.620416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.620428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.620437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.620452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.620459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.620466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.620473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.620480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.620487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.620504 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.734410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:28.734542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:28.734560 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:28.734605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.734620 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:28.734641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee', 'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.734653 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:28.734726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.734740 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.734752 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.734763 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.734789 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.915361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.915476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.915494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.915532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:28.915579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:28.915595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:28.915614 | orchestrator | skipping: [testbed-node-3]
2026-02-15 03:54:28.915626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:28.915638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:28.915649 | orchestrator | skipping: [testbed-node-4]
2026-02-15 03:54:28.915693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.915704 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.915715 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:28.915741 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:29.172511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:29.172607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:29.172618 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:29.172625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:29.172633 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:29.172640 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-02-15 03:54:29.172772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:29.172825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:29.172836 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:29.172844 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:29.172852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-02-15 03:54:29.172860 | orchestrator | skipping: [testbed-node-5]
2026-02-15 03:54:29.172869 | orchestrator |
2026-02-15 03:54:29.172876 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-15 03:54:29.172884 | orchestrator | Sunday 15 February 2026 03:54:29 +0000 (0:00:00.698) 0:00:18.787 *******
2026-02-15 03:54:29.172905 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:54:29.311923 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:54:29.312006 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:54:29.312016 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:54:29.312029 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:54:29.312040 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:54:29.312091 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:54:29.312123 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:54:29.312134 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:54:29.312145 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-02-15 03:54:29.312155 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee', 'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
 2026-02-15 03:54:29.312166 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.312188 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.312210 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-15 03:54:29.469821 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.469925 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.470107 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.470129 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.470142 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.470175 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.470188 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-32-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.470216 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.470229 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.470240 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.470253 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:29.470267 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.470299 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-15 03:54:29.597124 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.597258 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.597285 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.597342 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.597407 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.597473 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:29.597490 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.597519 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.597533 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.597545 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.597574 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.597592 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:29.597613 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:32.025277 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:32.025388 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:32.025427 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-02-15 03:54:32.025480 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:32.025496 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:32.025508 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:32.025529 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-02-15-02-28-37-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-02-15 03:54:32.025542 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:32.025556 | orchestrator | 2026-02-15 03:54:32.025569 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-15 03:54:32.025581 | orchestrator | Sunday 15 February 2026 03:54:29 +0000 (0:00:00.659) 0:00:19.446 ******* 2026-02-15 03:54:32.025593 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:32.025604 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:32.025616 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:32.025626 | orchestrator | 2026-02-15 03:54:32.025643 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-15 03:54:32.025689 | orchestrator | Sunday 15 February 2026 03:54:30 +0000 (0:00:00.915) 0:00:20.362 ******* 2026-02-15 03:54:32.025702 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:32.025713 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:32.025724 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:32.025735 | orchestrator | 2026-02-15 03:54:32.025746 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 03:54:32.025758 | orchestrator | Sunday 15 February 2026 03:54:30 +0000 (0:00:00.348) 0:00:20.710 ******* 2026-02-15 03:54:32.025769 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:54:32.025780 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:54:32.025791 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:54:32.025802 | orchestrator | 2026-02-15 03:54:32.025816 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 03:54:32.025828 | orchestrator | Sunday 15 February 2026 03:54:31 +0000 (0:00:00.708) 0:00:21.419 
******* 2026-02-15 03:54:32.025842 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:54:32.025855 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:54:32.025868 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:54:32.025881 | orchestrator | 2026-02-15 03:54:32.025894 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 03:54:32.025914 | orchestrator | Sunday 15 February 2026 03:54:32 +0000 (0:00:00.322) 0:00:21.742 ******* 2026-02-15 03:55:27.882092 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:55:27.882209 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:55:27.882222 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:55:27.882232 | orchestrator | 2026-02-15 03:55:27.882244 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 03:55:27.882255 | orchestrator | Sunday 15 February 2026 03:54:32 +0000 (0:00:00.755) 0:00:22.497 ******* 2026-02-15 03:55:27.882265 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:55:27.882274 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:55:27.882284 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:55:27.882293 | orchestrator | 2026-02-15 03:55:27.882302 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 03:55:27.882335 | orchestrator | Sunday 15 February 2026 03:54:33 +0000 (0:00:00.354) 0:00:22.852 ******* 2026-02-15 03:55:27.882344 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-15 03:55:27.882354 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-15 03:55:27.882364 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-15 03:55:27.882373 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-15 03:55:27.882382 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-15 03:55:27.882392 | orchestrator | 
ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-15 03:55:27.882402 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-15 03:55:27.882411 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-15 03:55:27.882421 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-15 03:55:27.882430 | orchestrator | 2026-02-15 03:55:27.882440 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 03:55:27.882449 | orchestrator | Sunday 15 February 2026 03:54:34 +0000 (0:00:01.210) 0:00:24.062 ******* 2026-02-15 03:55:27.882458 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-15 03:55:27.882468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-15 03:55:27.882478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-15 03:55:27.882487 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:55:27.882496 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-15 03:55:27.882506 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-15 03:55:27.882515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-15 03:55:27.882524 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:55:27.882533 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-15 03:55:27.882543 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-15 03:55:27.882553 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-15 03:55:27.882562 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:55:27.882571 | orchestrator | 2026-02-15 03:55:27.882579 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-15 03:55:27.882589 | orchestrator | Sunday 15 February 2026 03:54:34 +0000 (0:00:00.444) 0:00:24.507 ******* 2026-02-15 
03:55:27.882600 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 03:55:27.882610 | orchestrator | 2026-02-15 03:55:27.882620 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-15 03:55:27.882631 | orchestrator | Sunday 15 February 2026 03:54:35 +0000 (0:00:00.923) 0:00:25.430 ******* 2026-02-15 03:55:27.882641 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:55:27.882650 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:55:27.882660 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:55:27.882689 | orchestrator | 2026-02-15 03:55:27.882698 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-15 03:55:27.882707 | orchestrator | Sunday 15 February 2026 03:54:36 +0000 (0:00:00.355) 0:00:25.786 ******* 2026-02-15 03:55:27.882716 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:55:27.882725 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:55:27.882733 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:55:27.882742 | orchestrator | 2026-02-15 03:55:27.882751 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-15 03:55:27.882760 | orchestrator | Sunday 15 February 2026 03:54:36 +0000 (0:00:00.319) 0:00:26.105 ******* 2026-02-15 03:55:27.882769 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:55:27.882777 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:55:27.882787 | orchestrator | skipping: [testbed-node-5] 2026-02-15 03:55:27.882796 | orchestrator | 2026-02-15 03:55:27.882818 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-15 03:55:27.882835 | orchestrator | Sunday 15 February 2026 03:54:36 +0000 (0:00:00.580) 0:00:26.686 ******* 2026-02-15 
03:55:27.882844 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:55:27.882852 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:55:27.882861 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:55:27.882870 | orchestrator | 2026-02-15 03:55:27.882878 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-15 03:55:27.882887 | orchestrator | Sunday 15 February 2026 03:54:37 +0000 (0:00:00.441) 0:00:27.128 ******* 2026-02-15 03:55:27.882895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 03:55:27.882904 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 03:55:27.882913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-15 03:55:27.882921 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:55:27.882930 | orchestrator | 2026-02-15 03:55:27.882938 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 03:55:27.882947 | orchestrator | Sunday 15 February 2026 03:54:37 +0000 (0:00:00.400) 0:00:27.528 ******* 2026-02-15 03:55:27.882956 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 03:55:27.882965 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 03:55:27.882991 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-15 03:55:27.882999 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:55:27.883008 | orchestrator | 2026-02-15 03:55:27.883017 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 03:55:27.883026 | orchestrator | Sunday 15 February 2026 03:54:38 +0000 (0:00:00.426) 0:00:27.955 ******* 2026-02-15 03:55:27.883034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 03:55:27.883043 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 03:55:27.883051 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-15 03:55:27.883060 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:55:27.883068 | orchestrator | 2026-02-15 03:55:27.883077 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 03:55:27.883086 | orchestrator | Sunday 15 February 2026 03:54:38 +0000 (0:00:00.415) 0:00:28.371 ******* 2026-02-15 03:55:27.883095 | orchestrator | ok: [testbed-node-3] 2026-02-15 03:55:27.883104 | orchestrator | ok: [testbed-node-4] 2026-02-15 03:55:27.883113 | orchestrator | ok: [testbed-node-5] 2026-02-15 03:55:27.883123 | orchestrator | 2026-02-15 03:55:27.883131 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 03:55:27.883141 | orchestrator | Sunday 15 February 2026 03:54:39 +0000 (0:00:00.361) 0:00:28.732 ******* 2026-02-15 03:55:27.883150 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-15 03:55:27.883159 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-15 03:55:27.883168 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-15 03:55:27.883177 | orchestrator | 2026-02-15 03:55:27.883186 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-15 03:55:27.883195 | orchestrator | Sunday 15 February 2026 03:54:39 +0000 (0:00:00.832) 0:00:29.565 ******* 2026-02-15 03:55:27.883204 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 03:55:27.883214 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 03:55:27.883223 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 03:55:27.883232 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-15 03:55:27.883241 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-15 03:55:27.883249 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 03:55:27.883259 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 03:55:27.883274 | orchestrator | 2026-02-15 03:55:27.883284 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-15 03:55:27.883292 | orchestrator | Sunday 15 February 2026 03:54:40 +0000 (0:00:00.945) 0:00:30.511 ******* 2026-02-15 03:55:27.883302 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 03:55:27.883311 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 03:55:27.883320 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 03:55:27.883329 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-15 03:55:27.883337 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 03:55:27.883346 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 03:55:27.883355 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 03:55:27.883363 | orchestrator | 2026-02-15 03:55:27.883372 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-02-15 03:55:27.883380 | orchestrator | Sunday 15 February 2026 03:54:42 +0000 (0:00:01.817) 0:00:32.329 ******* 2026-02-15 03:55:27.883389 | orchestrator | skipping: [testbed-node-3] 2026-02-15 03:55:27.883398 | orchestrator | skipping: [testbed-node-4] 2026-02-15 03:55:27.883406 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-02-15 03:55:27.883415 | orchestrator | 2026-02-15 03:55:27.883423 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-02-15 03:55:27.883431 | orchestrator | Sunday 15 February 2026 03:54:43 +0000 (0:00:00.423) 0:00:32.752 ******* 2026-02-15 03:55:27.883448 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-15 03:55:27.883460 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-15 03:55:27.883469 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-15 03:55:27.883486 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-15 03:56:20.889444 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-02-15 03:56:20.889558 | orchestrator | 2026-02-15 03:56:20.889575 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-02-15 03:56:20.889606 | orchestrator | Sunday 15 February 2026 03:55:27 +0000 (0:00:44.833) 0:01:17.585 ******* 2026-02-15 03:56:20.889628 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889641 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889652 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889663 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889737 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889750 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889762 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-02-15 03:56:20.889773 | orchestrator | 2026-02-15 03:56:20.889784 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-02-15 03:56:20.889796 | orchestrator | Sunday 15 February 2026 03:55:51 +0000 (0:00:23.495) 0:01:41.081 ******* 2026-02-15 03:56:20.889807 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889818 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889829 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889840 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889850 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889861 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889872 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 03:56:20.889883 | orchestrator | 2026-02-15 03:56:20.889894 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-02-15 03:56:20.889905 | orchestrator | Sunday 15 February 2026 03:56:03 +0000 (0:00:11.695) 0:01:52.777 ******* 2026-02-15 03:56:20.889917 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889927 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-15 03:56:20.889938 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-15 03:56:20.889949 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889960 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-15 03:56:20.889971 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-15 03:56:20.889982 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.889995 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-15 03:56:20.890007 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-15 03:56:20.890073 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.890171 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-15 03:56:20.890187 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-15 03:56:20.890201 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.890225 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-02-15 03:56:20.890236 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-15 03:56:20.890247 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 03:56:20.890258 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-15 03:56:20.890269 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-15 03:56:20.890280 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-02-15 03:56:20.890292 | orchestrator | 2026-02-15 03:56:20.890303 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 03:56:20.890314 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-02-15 03:56:20.890327 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-15 03:56:20.890349 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-02-15 03:56:20.890360 | orchestrator | 2026-02-15 03:56:20.890371 | orchestrator | 2026-02-15 03:56:20.890382 | orchestrator | 2026-02-15 03:56:20.890413 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 03:56:20.890425 | orchestrator | Sunday 15 February 2026 03:56:20 +0000 (0:00:17.411) 0:02:10.188 ******* 2026-02-15 03:56:20.890436 | orchestrator | =============================================================================== 2026-02-15 03:56:20.890448 | orchestrator | create openstack pool(s) ----------------------------------------------- 44.83s 2026-02-15 03:56:20.890459 | orchestrator | generate keys ---------------------------------------------------------- 23.50s 2026-02-15 03:56:20.890470 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.41s 
2026-02-15 03:56:20.890481 | orchestrator | get keys from monitors ------------------------------------------------- 11.70s 2026-02-15 03:56:20.890492 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.26s 2026-02-15 03:56:20.890503 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.82s 2026-02-15 03:56:20.890514 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.77s 2026-02-15 03:56:20.890525 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.21s 2026-02-15 03:56:20.890536 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 1.13s 2026-02-15 03:56:20.890547 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.95s 2026-02-15 03:56:20.890558 | orchestrator | ceph-facts : Get current fsid ------------------------------------------- 0.93s 2026-02-15 03:56:20.890569 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.92s 2026-02-15 03:56:20.890580 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.92s 2026-02-15 03:56:20.890591 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.88s 2026-02-15 03:56:20.890602 | orchestrator | ceph-facts : Set_fact rgw_instances ------------------------------------- 0.83s 2026-02-15 03:56:20.890614 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.76s 2026-02-15 03:56:20.890625 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.72s 2026-02-15 03:56:20.890636 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.71s 2026-02-15 03:56:20.890647 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.71s 2026-02-15 
03:56:20.890658 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.71s 2026-02-15 03:56:23.406597 | orchestrator | 2026-02-15 03:56:23 | INFO  | Task ba5ab845-8eed-4c3b-a5d3-e2218f7ed79e (copy-ceph-keys) was prepared for execution. 2026-02-15 03:56:23.406757 | orchestrator | 2026-02-15 03:56:23 | INFO  | It takes a moment until task ba5ab845-8eed-4c3b-a5d3-e2218f7ed79e (copy-ceph-keys) has been started and output is visible here. 2026-02-15 03:57:04.545295 | orchestrator | 2026-02-15 03:57:04.545375 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-02-15 03:57:04.545383 | orchestrator | 2026-02-15 03:57:04.545389 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-02-15 03:57:04.545395 | orchestrator | Sunday 15 February 2026 03:56:27 +0000 (0:00:00.203) 0:00:00.203 ******* 2026-02-15 03:57:04.545400 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-15 03:57:04.545406 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-15 03:57:04.545411 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-15 03:57:04.545431 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-15 03:57:04.545437 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-15 03:57:04.545442 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-15 03:57:04.545457 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-15 03:57:04.545462 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.gnocchi.keyring) 2026-02-15 03:57:04.545467 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-15 03:57:04.545471 | orchestrator | 2026-02-15 03:57:04.545476 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-02-15 03:57:04.545481 | orchestrator | Sunday 15 February 2026 03:56:32 +0000 (0:00:04.784) 0:00:04.988 ******* 2026-02-15 03:57:04.545486 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-02-15 03:57:04.545490 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-15 03:57:04.545495 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-15 03:57:04.545500 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-02-15 03:57:04.545504 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-02-15 03:57:04.545509 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-02-15 03:57:04.545514 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-02-15 03:57:04.545518 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-02-15 03:57:04.545523 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-02-15 03:57:04.545527 | orchestrator | 2026-02-15 03:57:04.545532 | orchestrator | TASK [Create share directory] ************************************************** 2026-02-15 03:57:04.545537 | orchestrator | Sunday 15 February 2026 03:56:37 +0000 (0:00:04.560) 0:00:09.548 ******* 2026-02-15 03:57:04.545542 
| orchestrator | changed: [testbed-manager -> localhost]
2026-02-15 03:57:04.545547 | orchestrator |
2026-02-15 03:57:04.545553 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-02-15 03:57:04.545557 | orchestrator | Sunday 15 February 2026 03:56:38 +0000 (0:00:01.299) 0:00:10.848 *******
2026-02-15 03:57:04.545562 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-02-15 03:57:04.545568 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-15 03:57:04.545572 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-15 03:57:04.545577 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-02-15 03:57:04.545582 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-02-15 03:57:04.545587 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-02-15 03:57:04.545591 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-02-15 03:57:04.545596 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-02-15 03:57:04.545601 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-02-15 03:57:04.545606 | orchestrator |
2026-02-15 03:57:04.545610 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-02-15 03:57:04.545615 | orchestrator | Sunday 15 February 2026 03:56:53 +0000 (0:00:14.580) 0:00:25.428 *******
2026-02-15 03:57:04.545623 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-02-15 03:57:04.545628 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-02-15 03:57:04.545633 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-15 03:57:04.545638 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-02-15 03:57:04.545652 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-15 03:57:04.545657 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-02-15 03:57:04.545661 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-02-15 03:57:04.545666 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-02-15 03:57:04.545671 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-02-15 03:57:04.545722 | orchestrator |
2026-02-15 03:57:04.545728 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-02-15 03:57:04.545733 | orchestrator | Sunday 15 February 2026 03:56:56 +0000 (0:00:03.369) 0:00:28.798 *******
2026-02-15 03:57:04.545738 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-02-15 03:57:04.545743 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-15 03:57:04.545748 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-15 03:57:04.545755 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-02-15 03:57:04.545760 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-02-15 03:57:04.545765 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-02-15 03:57:04.545770 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-02-15 03:57:04.545775 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-02-15 03:57:04.545779 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-02-15 03:57:04.545784 | orchestrator |
2026-02-15 03:57:04.545789 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:57:04.545793 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:57:04.545800 | orchestrator |
2026-02-15 03:57:04.545804 | orchestrator |
2026-02-15 03:57:04.545809 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:57:04.545813 | orchestrator | Sunday 15 February 2026 03:57:04 +0000 (0:00:07.594) 0:00:36.393 *******
2026-02-15 03:57:04.545818 | orchestrator | ===============================================================================
2026-02-15 03:57:04.545823 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.58s
2026-02-15 03:57:04.545827 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.59s
2026-02-15 03:57:04.545832 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.78s
2026-02-15 03:57:04.545836 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.56s
2026-02-15 03:57:04.545841 | orchestrator | Check if target directories exist --------------------------------------- 3.37s
2026-02-15 03:57:04.545845 | orchestrator | Create share directory -------------------------------------------------- 1.30s
2026-02-15 03:57:17.058159 | orchestrator | 2026-02-15 03:57:17 | INFO  | Task f9447155-83c9-4cc6-a72e-261980a173d2 (cephclient) was prepared for execution.
2026-02-15 03:57:17.058277 | orchestrator | 2026-02-15 03:57:17 | INFO  | It takes a moment until task f9447155-83c9-4cc6-a72e-261980a173d2 (cephclient) has been started and output is visible here.
2026-02-15 03:58:20.801316 | orchestrator |
2026-02-15 03:58:20.801435 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-15 03:58:20.801455 | orchestrator |
2026-02-15 03:58:20.801473 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-15 03:58:20.801487 | orchestrator | Sunday 15 February 2026 03:57:21 +0000 (0:00:00.256) 0:00:00.256 *******
2026-02-15 03:58:20.801505 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-15 03:58:20.801523 | orchestrator |
2026-02-15 03:58:20.801540 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-15 03:58:20.801556 | orchestrator | Sunday 15 February 2026 03:57:21 +0000 (0:00:00.252) 0:00:00.508 *******
2026-02-15 03:58:20.801573 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-15 03:58:20.801590 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-15 03:58:20.801606 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-15 03:58:20.801623 | orchestrator |
2026-02-15 03:58:20.801640 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-15 03:58:20.801655 | orchestrator | Sunday 15 February 2026 03:57:23 +0000 (0:00:01.341) 0:00:01.849 *******
2026-02-15 03:58:20.801671 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-15 03:58:20.801749 | orchestrator |
2026-02-15 03:58:20.801767 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-15 03:58:20.801783 | orchestrator | Sunday 15 February 2026 03:57:24 +0000 (0:00:01.611) 0:00:03.461 *******
2026-02-15 03:58:20.801798 | orchestrator | changed: [testbed-manager]
2026-02-15 03:58:20.801815 | orchestrator |
2026-02-15 03:58:20.801831 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-15 03:58:20.801847 | orchestrator | Sunday 15 February 2026 03:57:25 +0000 (0:00:00.999) 0:00:04.461 *******
2026-02-15 03:58:20.801864 | orchestrator | changed: [testbed-manager]
2026-02-15 03:58:20.801880 | orchestrator |
2026-02-15 03:58:20.801897 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-15 03:58:20.801914 | orchestrator | Sunday 15 February 2026 03:57:26 +0000 (0:00:00.982) 0:00:05.443 *******
2026-02-15 03:58:20.801934 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2026-02-15 03:58:20.801950 | orchestrator | ok: [testbed-manager]
2026-02-15 03:58:20.801966 | orchestrator |
2026-02-15 03:58:20.801985 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-15 03:58:20.801999 | orchestrator | Sunday 15 February 2026 03:58:10 +0000 (0:00:43.197) 0:00:48.640 *******
2026-02-15 03:58:20.802074 | orchestrator | changed: [testbed-manager] => (item=ceph)
2026-02-15 03:58:20.802101 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2026-02-15 03:58:20.802115 | orchestrator | changed: [testbed-manager] => (item=rados)
2026-02-15 03:58:20.802128 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2026-02-15 03:58:20.802142 | orchestrator | changed: [testbed-manager] => (item=rbd)
2026-02-15 03:58:20.802156 | orchestrator |
2026-02-15 03:58:20.802170 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-15 03:58:20.802184 | orchestrator | Sunday 15 February 2026 03:58:14 +0000 (0:00:04.377) 0:00:53.018 *******
2026-02-15 03:58:20.802197 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-15 03:58:20.802210 | orchestrator |
2026-02-15 03:58:20.802241 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-15 03:58:20.802254 | orchestrator | Sunday 15 February 2026 03:58:14 +0000 (0:00:00.515) 0:00:53.533 *******
2026-02-15 03:58:20.802268 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:58:20.802283 | orchestrator |
2026-02-15 03:58:20.802395 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-15 03:58:20.802442 | orchestrator | Sunday 15 February 2026 03:58:15 +0000 (0:00:00.173) 0:00:53.707 *******
2026-02-15 03:58:20.802455 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:58:20.802467 | orchestrator |
2026-02-15 03:58:20.802482 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2026-02-15 03:58:20.802495 | orchestrator | Sunday 15 February 2026 03:58:15 +0000 (0:00:00.632) 0:00:54.340 *******
2026-02-15 03:58:20.802510 | orchestrator | changed: [testbed-manager]
2026-02-15 03:58:20.802524 | orchestrator |
2026-02-15 03:58:20.802536 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2026-02-15 03:58:20.802549 | orchestrator | Sunday 15 February 2026 03:58:17 +0000 (0:00:01.599) 0:00:55.940 *******
2026-02-15 03:58:20.802579 | orchestrator | changed: [testbed-manager]
2026-02-15 03:58:20.802591 | orchestrator |
2026-02-15 03:58:20.802605 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2026-02-15 03:58:20.802618 | orchestrator | Sunday 15 February 2026 03:58:18 +0000 (0:00:00.789) 0:00:56.729 *******
2026-02-15 03:58:20.802632 | orchestrator | changed: [testbed-manager]
2026-02-15 03:58:20.802647 | orchestrator |
2026-02-15 03:58:20.802662 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2026-02-15 03:58:20.802675 | orchestrator | Sunday 15 February 2026 03:58:18 +0000 (0:00:00.606) 0:00:57.336 *******
2026-02-15 03:58:20.802715 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-15 03:58:20.802731 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-15 03:58:20.802745 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-15 03:58:20.802758 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-15 03:58:20.802772 | orchestrator |
2026-02-15 03:58:20.802786 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:58:20.802800 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 03:58:20.802815 | orchestrator |
2026-02-15 03:58:20.802829 | orchestrator |
2026-02-15 03:58:20.802865 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:58:20.802877 | orchestrator | Sunday 15 February 2026 03:58:20 +0000 (0:00:01.668) 0:00:59.004 *******
2026-02-15 03:58:20.802890 | orchestrator | ===============================================================================
2026-02-15 03:58:20.802904 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 43.20s
2026-02-15 03:58:20.802918 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.38s
2026-02-15 03:58:20.802933 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.67s
2026-02-15 03:58:20.802947 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.61s
2026-02-15 03:58:20.802961 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.60s
2026-02-15 03:58:20.802975 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.34s
2026-02-15 03:58:20.802990 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.00s
2026-02-15 03:58:20.803005 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.98s
2026-02-15 03:58:20.803019 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.79s
2026-02-15 03:58:20.803033 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.63s
2026-02-15 03:58:20.803048 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.61s
2026-02-15 03:58:20.803060 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.52s
2026-02-15 03:58:20.803071 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s
2026-02-15 03:58:20.803081 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.17s
2026-02-15 03:58:23.385027 | orchestrator | 2026-02-15 03:58:23 | INFO  | Task de0f9eb7-a00d-4db4-b909-f3dfa74d2a41 (ceph-bootstrap-dashboard) was prepared for execution.
2026-02-15 03:58:23.385126 | orchestrator | 2026-02-15 03:58:23 | INFO  | It takes a moment until task de0f9eb7-a00d-4db4-b909-f3dfa74d2a41 (ceph-bootstrap-dashboard) has been started and output is visible here.
2026-02-15 03:59:45.180435 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-02-15 03:59:45.180546 | orchestrator | 2.16.14
2026-02-15 03:59:45.180561 | orchestrator |
2026-02-15 03:59:45.180572 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2026-02-15 03:59:45.180582 | orchestrator |
2026-02-15 03:59:45.180591 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2026-02-15 03:59:45.180601 | orchestrator | Sunday 15 February 2026 03:58:28 +0000 (0:00:00.294) 0:00:00.294 *******
2026-02-15 03:59:45.180610 | orchestrator | changed: [testbed-manager]
2026-02-15 03:59:45.180620 | orchestrator |
2026-02-15 03:59:45.180629 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2026-02-15 03:59:45.180638 | orchestrator | Sunday 15 February 2026 03:58:30 +0000 (0:00:01.694) 0:00:01.989 *******
2026-02-15 03:59:45.180647 | orchestrator | changed: [testbed-manager]
2026-02-15 03:59:45.180656 | orchestrator |
2026-02-15 03:59:45.180665 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2026-02-15 03:59:45.180673 | orchestrator | Sunday 15 February 2026 03:58:31 +0000 (0:00:01.100) 0:00:03.090 *******
2026-02-15 03:59:45.180781 | orchestrator | changed: [testbed-manager]
2026-02-15 03:59:45.180797 | orchestrator |
2026-02-15 03:59:45.180830 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2026-02-15 03:59:45.180845 | orchestrator | Sunday 15 February 2026 03:58:32 +0000 (0:00:01.143) 0:00:04.233 *******
2026-02-15 03:59:45.180858 | orchestrator | changed: [testbed-manager]
2026-02-15 03:59:45.180872 | orchestrator |
2026-02-15 03:59:45.180885 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2026-02-15 03:59:45.180900 | orchestrator | Sunday 15 February 2026 03:58:33 +0000 (0:00:01.290) 0:00:05.523 *******
2026-02-15 03:59:45.180915 | orchestrator | changed: [testbed-manager]
2026-02-15 03:59:45.180930 | orchestrator |
2026-02-15 03:59:45.180945 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2026-02-15 03:59:45.180961 | orchestrator | Sunday 15 February 2026 03:58:34 +0000 (0:00:01.168) 0:00:06.692 *******
2026-02-15 03:59:45.180977 | orchestrator | changed: [testbed-manager]
2026-02-15 03:59:45.180992 | orchestrator |
2026-02-15 03:59:45.181007 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2026-02-15 03:59:45.181022 | orchestrator | Sunday 15 February 2026 03:58:35 +0000 (0:00:01.094) 0:00:07.786 *******
2026-02-15 03:59:45.181037 | orchestrator | changed: [testbed-manager]
2026-02-15 03:59:45.181053 | orchestrator |
2026-02-15 03:59:45.181068 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2026-02-15 03:59:45.181085 | orchestrator | Sunday 15 February 2026 03:58:37 +0000 (0:00:02.131) 0:00:09.918 *******
2026-02-15 03:59:45.181102 | orchestrator | changed: [testbed-manager]
2026-02-15 03:59:45.181117 | orchestrator |
2026-02-15 03:59:45.181132 | orchestrator | TASK [Create admin user] *******************************************************
2026-02-15 03:59:45.181146 | orchestrator | Sunday 15 February 2026 03:58:39 +0000 (0:00:01.201) 0:00:11.119 *******
2026-02-15 03:59:45.181157 | orchestrator | changed: [testbed-manager]
2026-02-15 03:59:45.181167 | orchestrator |
2026-02-15 03:59:45.181178 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2026-02-15 03:59:45.181188 | orchestrator | Sunday 15 February 2026 03:59:20 +0000 (0:00:41.008) 0:00:52.128 *******
2026-02-15 03:59:45.181199 | orchestrator | skipping: [testbed-manager]
2026-02-15 03:59:45.181209 | orchestrator |
2026-02-15 03:59:45.181219 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-15 03:59:45.181230 | orchestrator |
2026-02-15 03:59:45.181240 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-15 03:59:45.181250 | orchestrator | Sunday 15 February 2026 03:59:20 +0000 (0:00:00.178) 0:00:52.306 *******
2026-02-15 03:59:45.181283 | orchestrator | changed: [testbed-node-0]
2026-02-15 03:59:45.181293 | orchestrator |
2026-02-15 03:59:45.181304 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-15 03:59:45.181313 | orchestrator |
2026-02-15 03:59:45.181324 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-15 03:59:45.181334 | orchestrator | Sunday 15 February 2026 03:59:32 +0000 (0:00:11.986) 0:01:04.293 *******
2026-02-15 03:59:45.181344 | orchestrator | changed: [testbed-node-1]
2026-02-15 03:59:45.181353 | orchestrator |
2026-02-15 03:59:45.181362 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2026-02-15 03:59:45.181371 | orchestrator |
2026-02-15 03:59:45.181379 | orchestrator | TASK [Restart ceph manager service] ********************************************
2026-02-15 03:59:45.181388 | orchestrator | Sunday 15 February 2026 03:59:43 +0000 (0:00:11.137) 0:01:15.431 *******
2026-02-15 03:59:45.181397 | orchestrator | changed: [testbed-node-2]
2026-02-15 03:59:45.181406 | orchestrator |
2026-02-15 03:59:45.181414 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 03:59:45.181424 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-15 03:59:45.181435 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:59:45.181445 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:59:45.181453 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 03:59:45.181462 | orchestrator |
2026-02-15 03:59:45.181471 | orchestrator |
2026-02-15 03:59:45.181479 | orchestrator |
2026-02-15 03:59:45.181489 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 03:59:45.181497 | orchestrator | Sunday 15 February 2026 03:59:44 +0000 (0:00:01.209) 0:01:16.641 *******
2026-02-15 03:59:45.181506 | orchestrator | ===============================================================================
2026-02-15 03:59:45.181515 | orchestrator | Create admin user ------------------------------------------------------ 41.01s
2026-02-15 03:59:45.181543 | orchestrator | Restart ceph manager service ------------------------------------------- 24.33s
2026-02-15 03:59:45.181552 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.13s
2026-02-15 03:59:45.181561 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.70s
2026-02-15 03:59:45.181569 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.29s
2026-02-15 03:59:45.181578 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.20s
2026-02-15 03:59:45.181587 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.17s
2026-02-15 03:59:45.181595 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.14s
2026-02-15 03:59:45.181604 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.10s
2026-02-15 03:59:45.181612 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.09s
2026-02-15 03:59:45.181621 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.18s
2026-02-15 03:59:45.579533 | orchestrator | + sh -c /opt/configuration/scripts/deploy/300-openstack.sh
2026-02-15 03:59:47.719201 | orchestrator | 2026-02-15 03:59:47 | INFO  | Task a647835c-d65d-47bc-b930-c4197d882814 (keystone) was prepared for execution.
2026-02-15 03:59:47.719354 | orchestrator | 2026-02-15 03:59:47 | INFO  | It takes a moment until task a647835c-d65d-47bc-b930-c4197d882814 (keystone) has been started and output is visible here.
2026-02-15 03:59:55.497018 | orchestrator |
2026-02-15 03:59:55.497122 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-15 03:59:55.497152 | orchestrator |
2026-02-15 03:59:55.497160 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-15 03:59:55.497167 | orchestrator | Sunday 15 February 2026 03:59:52 +0000 (0:00:00.283) 0:00:00.283 *******
2026-02-15 03:59:55.497174 | orchestrator | ok: [testbed-node-0]
2026-02-15 03:59:55.497182 | orchestrator | ok: [testbed-node-1]
2026-02-15 03:59:55.497188 | orchestrator | ok: [testbed-node-2]
2026-02-15 03:59:55.497194 | orchestrator |
2026-02-15 03:59:55.497200 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-15 03:59:55.497207 | orchestrator | Sunday 15 February 2026 03:59:52 +0000 (0:00:00.343) 0:00:00.626 *******
2026-02-15 03:59:55.497213 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2026-02-15 03:59:55.497220 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2026-02-15 03:59:55.497227 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2026-02-15 03:59:55.497233 | orchestrator |
2026-02-15 03:59:55.497239 | orchestrator | PLAY [Apply role keystone] *****************************************************
2026-02-15 03:59:55.497246 | orchestrator |
2026-02-15 03:59:55.497252 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-15 03:59:55.497258 | orchestrator | Sunday 15 February 2026 03:59:53 +0000 (0:00:00.522) 0:00:01.149 *******
2026-02-15 03:59:55.497265 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 03:59:55.497272 | orchestrator |
2026-02-15 03:59:55.497278 | orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2026-02-15 03:59:55.497285 | orchestrator | Sunday 15 February 2026 03:59:53 +0000 (0:00:00.685) 0:00:01.835 *******
2026-02-15 03:59:55.497296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 03:59:55.497306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 03:59:55.497338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 03:59:55.497353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 03:59:55.497361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 03:59:55.497367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 03:59:55.497374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 03:59:55.497381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 03:59:55.497395 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 03:59:55.497402 | orchestrator |
2026-02-15 03:59:55.497408 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2026-02-15 03:59:55.497419 | orchestrator | Sunday 15 February 2026 03:59:55 +0000 (0:00:01.770) 0:00:03.605 *******
2026-02-15 04:00:01.450602 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:00:01.450823 | orchestrator |
2026-02-15 04:00:01.450854 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2026-02-15 04:00:01.450876 | orchestrator | Sunday 15 February 2026 03:59:55 +0000 (0:00:00.329) 0:00:03.934 *******
2026-02-15 04:00:01.450894 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:00:01.450915 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:00:01.450933 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:00:01.450952 | orchestrator |
2026-02-15 04:00:01.450968 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2026-02-15 04:00:01.450979 | orchestrator | Sunday 15 February 2026 03:59:56 +0000 (0:00:00.325) 0:00:04.260 *******
2026-02-15 04:00:01.450991 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 04:00:01.451001 | orchestrator |
2026-02-15 04:00:01.451013 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-15 04:00:01.451024 | orchestrator | Sunday 15 February 2026 03:59:57 +0000 (0:00:00.877) 0:00:05.138 *******
2026-02-15 04:00:01.451036 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 04:00:01.451047 | orchestrator |
2026-02-15 04:00:01.451058 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2026-02-15 04:00:01.451069 | orchestrator | Sunday 15 February 2026 03:59:57 +0000 (0:00:00.621) 0:00:05.760 *******
2026-02-15 04:00:01.451086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 04:00:01.451102 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 04:00:01.451154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 04:00:01.451189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 04:00:01.451204 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 04:00:01.451215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 04:00:01.451227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 04:00:01.451251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes':
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-15 04:00:01.451272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-15 04:00:01.451290 | orchestrator | 2026-02-15 04:00:01.451308 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-02-15 04:00:01.451326 | orchestrator | Sunday 15 February 2026 04:00:00 +0000 (0:00:03.170) 0:00:08.930 ******* 2026-02-15 04:00:01.451360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-15 04:00:02.361270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 04:00:02.361403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 04:00:02.361416 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:00:02.361427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-15 04:00:02.361455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 04:00:02.361463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 04:00:02.361469 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:00:02.361491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-15 04:00:02.361499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-15 04:00:02.361511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 04:00:02.361519 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:00:02.361525 | orchestrator | 2026-02-15 04:00:02.361533 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-02-15 04:00:02.361542 | orchestrator | Sunday 15 February 2026 04:00:01 +0000 (0:00:00.632) 0:00:09.563 ******* 2026-02-15 04:00:02.361552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-15 04:00:02.361560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 04:00:02.361572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 04:00:06.182272 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:00:06.182359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-15 04:00:06.182387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 04:00:06.182395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 04:00:06.182402 | 
orchestrator | skipping: [testbed-node-1] 2026-02-15 04:00:06.182418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-02-15 04:00:06.182425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 04:00:06.182442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 04:00:06.182453 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:00:06.182459 | orchestrator | 2026-02-15 04:00:06.182465 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-02-15 04:00:06.182472 | orchestrator | Sunday 15 February 2026 04:00:02 +0000 (0:00:00.910) 0:00:10.474 ******* 2026-02-15 04:00:06.182478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-15 04:00:06.182488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-15 04:00:06.182494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-15 04:00:06.182506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-15 04:00:11.096448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-02-15 04:00:11.096547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 
2026-02-15 04:00:11.096565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-15 04:00:11.096596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-15 04:00:11.096611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-02-15 
04:00:11.096624 | orchestrator | 2026-02-15 04:00:11.096640 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-02-15 04:00:11.096654 | orchestrator | Sunday 15 February 2026 04:00:06 +0000 (0:00:03.813) 0:00:14.288 ******* 2026-02-15 04:00:11.096754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-02-15 04:00:11.096785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-02-15 04:00:11.096794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 04:00:11.096808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 04:00:11.096843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 04:00:11.096860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 04:00:15.165297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 04:00:15.165387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 04:00:15.165399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 04:00:15.165409 | orchestrator |
2026-02-15 04:00:15.165420 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2026-02-15 04:00:15.165430 | orchestrator | Sunday 15 February 2026 04:00:11 +0000 (0:00:04.920) 0:00:19.209 *******
2026-02-15 04:00:15.165439 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:00:15.165448 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:00:15.165456 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:00:15.165464 | orchestrator |
2026-02-15 04:00:15.165488 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2026-02-15 04:00:15.165496 | orchestrator | Sunday 15 February 2026 04:00:12 +0000 (0:00:01.506) 0:00:20.716 *******
2026-02-15 04:00:15.165504 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:00:15.165513 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:00:15.165521 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:00:15.165528 | orchestrator |
2026-02-15 04:00:15.165537 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2026-02-15 04:00:15.165545 | orchestrator | Sunday 15 February 2026 04:00:13 +0000 (0:00:00.588) 0:00:21.692 *******
2026-02-15 04:00:15.165553 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:00:15.165561 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:00:15.165569 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:00:15.165576 | orchestrator |
2026-02-15 04:00:15.165585 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2026-02-15 04:00:15.165612 | orchestrator | Sunday 15 February 2026 04:00:14 +0000 (0:00:00.588) 0:00:22.281 *******
2026-02-15 04:00:15.165621 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:00:15.165629 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:00:15.165637 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:00:15.165645 | orchestrator |
2026-02-15 04:00:15.165653 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2026-02-15 04:00:15.165661 | orchestrator | Sunday 15 February 2026 04:00:14 +0000 (0:00:00.312) 0:00:22.593 *******
2026-02-15 04:00:15.165730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 04:00:15.165744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 04:00:15.165754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 04:00:15.165767 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:00:15.165781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 04:00:15.165790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 04:00:15.165807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 04:00:15.165815 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:00:15.165831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 04:00:35.035030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 04:00:35.035131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 04:00:35.035143 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:00:35.035153 | orchestrator |
2026-02-15 04:00:35.035162 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-15 04:00:35.035170 | orchestrator | Sunday 15 February 2026 04:00:15 +0000 (0:00:00.680) 0:00:23.273 *******
2026-02-15 04:00:35.035178 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:00:35.035202 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:00:35.035221 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:00:35.035228 | orchestrator |
2026-02-15 04:00:35.035235 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2026-02-15 04:00:35.035242 | orchestrator | Sunday 15 February 2026 04:00:15 +0000 (0:00:00.299) 0:00:23.573 *******
2026-02-15 04:00:35.035249 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-15 04:00:35.035258 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-15 04:00:35.035265 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2026-02-15 04:00:35.035272 | orchestrator |
2026-02-15 04:00:35.035283 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2026-02-15 04:00:35.035295 | orchestrator | Sunday 15 February 2026 04:00:17 +0000 (0:00:01.987) 0:00:25.561 *******
2026-02-15 04:00:35.035302 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 04:00:35.035309 | orchestrator |
2026-02-15 04:00:35.035316 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2026-02-15 04:00:35.035323 | orchestrator | Sunday 15 February 2026 04:00:18 +0000 (0:00:01.068) 0:00:26.629 *******
2026-02-15 04:00:35.035329 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:00:35.035336 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:00:35.035343 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:00:35.035350 | orchestrator |
2026-02-15 04:00:35.035356 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2026-02-15 04:00:35.035363 | orchestrator | Sunday 15 February 2026 04:00:19 +0000 (0:00:00.593) 0:00:27.223 *******
2026-02-15 04:00:35.035370 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 04:00:35.035377 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-15 04:00:35.035383 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-15 04:00:35.035390 | orchestrator |
2026-02-15 04:00:35.035397 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2026-02-15 04:00:35.035404 | orchestrator | Sunday 15 February 2026 04:00:20 +0000 (0:00:01.076) 0:00:28.300 *******
2026-02-15 04:00:35.035411 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:00:35.035419 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:00:35.035426 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:00:35.035433 | orchestrator |
2026-02-15 04:00:35.035440 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2026-02-15 04:00:35.035446 | orchestrator | Sunday 15 February 2026 04:00:20 +0000 (0:00:00.590) 0:00:28.891 *******
2026-02-15 04:00:35.035453 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-15 04:00:35.035487 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-15 04:00:35.035495 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2026-02-15 04:00:35.035502 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-15 04:00:35.035509 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-15 04:00:35.035516 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2026-02-15 04:00:35.035522 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-15 04:00:35.035530 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-15 04:00:35.035549 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2026-02-15 04:00:35.035556 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-15 04:00:35.035563 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-15 04:00:35.035576 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2026-02-15 04:00:35.035584 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-15 04:00:35.035593 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-15 04:00:35.035601 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2026-02-15 04:00:35.035609 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-15 04:00:35.035617 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-15 04:00:35.035625 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-02-15 04:00:35.035633 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-15 04:00:35.035640 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-15 04:00:35.035648 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-02-15 04:00:35.035656 | orchestrator |
2026-02-15 04:00:35.035665 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2026-02-15 04:00:35.035673 | orchestrator | Sunday 15 February 2026 04:00:29 +0000 (0:00:08.947) 0:00:37.839 *******
2026-02-15 04:00:35.035731 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-15 04:00:35.035745 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-15 04:00:35.035753 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-02-15 04:00:35.035761 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-15 04:00:35.035770 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-15 04:00:35.035778 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-02-15 04:00:35.035786 | orchestrator |
2026-02-15 04:00:35.035795 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2026-02-15 04:00:35.035803 | orchestrator | Sunday 15 February 2026 04:00:32 +0000 (0:00:02.740) 0:00:40.579 *******
2026-02-15 04:00:35.035814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 04:00:35.035832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 04:02:16.026816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-02-15 04:02:16.026959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 04:02:16.026976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 04:02:16.026986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-02-15 04:02:16.026995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 04:02:16.027039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 04:02:16.027049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-02-15 04:02:16.027058 | orchestrator |
2026-02-15 04:02:16.027068 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-15 04:02:16.027078 | orchestrator | Sunday 15 February 2026 04:00:35 +0000 (0:00:02.563) 0:00:43.142 *******
2026-02-15 04:02:16.027086 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:02:16.027096 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:02:16.027104 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:02:16.027112 | orchestrator |
2026-02-15 04:02:16.027121 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2026-02-15 04:02:16.027129 | orchestrator | Sunday 15 February 2026 04:00:35 +0000 (0:00:00.557) 0:00:43.700 *******
2026-02-15 04:02:16.027137 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:02:16.027146 | orchestrator |
2026-02-15 04:02:16.027154 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2026-02-15 04:02:16.027162 | orchestrator | Sunday 15 February 2026 04:00:38 +0000 (0:00:02.602) 0:00:46.302 *******
2026-02-15 04:02:16.027171 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:02:16.027179 | orchestrator |
2026-02-15 04:02:16.027191 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2026-02-15 04:02:16.027200 | orchestrator | Sunday 15 February 2026 04:00:40 +0000 (0:00:02.500) 0:00:48.802 *******
2026-02-15 04:02:16.027208 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:02:16.027220 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:02:16.027233 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:02:16.027246 | orchestrator |
2026-02-15 04:02:16.027258 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2026-02-15 04:02:16.027270 | orchestrator | Sunday 15 February 2026 04:00:41 +0000 (0:00:00.832) 0:00:49.635 *******
2026-02-15 04:02:16.027281 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:02:16.027296 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:02:16.027311 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:02:16.027325 | orchestrator |
2026-02-15 04:02:16.027340 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-02-15 04:02:16.027356 | orchestrator | Sunday 15 February 2026 04:00:41 +0000 (0:00:00.328) 0:00:49.963 *******
2026-02-15 04:02:16.027368 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:02:16.027377 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:02:16.027387 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:02:16.027396 | orchestrator |
2026-02-15 04:02:16.027406 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-02-15 04:02:16.027416 | orchestrator | Sunday 15 February 2026 04:00:42 +0000 (0:00:00.594) 0:00:50.558 *******
2026-02-15 04:02:16.027432 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:02:16.027442 | orchestrator |
2026-02-15 04:02:16.027452 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-02-15 04:02:16.027462 | orchestrator | Sunday 15 February 2026 04:00:57 +0000 (0:00:14.867) 0:01:05.426 *******
2026-02-15 04:02:16.027472 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:02:16.027482 | orchestrator |
2026-02-15 04:02:16.027491 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-15 04:02:16.027501 | orchestrator | Sunday 15 February 2026 04:01:07 +0000 (0:00:10.390) 0:01:15.816 *******
2026-02-15 04:02:16.027511 | orchestrator |
2026-02-15 04:02:16.027521 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-15 04:02:16.027530 | orchestrator | Sunday 15 February 2026 04:01:07 +0000 (0:00:00.071) 0:01:15.887 *******
2026-02-15 04:02:16.027540 | orchestrator |
2026-02-15 04:02:16.027549 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-02-15 04:02:16.027558 | orchestrator | Sunday 15 February 2026 04:01:07 +0000 (0:00:00.087) 0:01:15.975 *******
2026-02-15 04:02:16.027568 | orchestrator |
2026-02-15 04:02:16.027578 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-02-15 04:02:16.027587 | orchestrator | Sunday 15 February 2026 04:01:07 +0000 (0:00:00.076) 0:01:16.052 *******
2026-02-15 04:02:16.027597 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:02:16.027607 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:02:16.027616 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:02:16.027626 | orchestrator |
2026-02-15 04:02:16.027636 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-02-15 04:02:16.027646 | orchestrator | Sunday 15 February 2026 04:02:00 +0000 (0:00:52.484) 0:02:08.536 *******
2026-02-15 04:02:16.027655 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:02:16.027663 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:02:16.027671 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:02:16.027679 | orchestrator |
2026-02-15 04:02:16.027688 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-02-15 04:02:16.027716 | orchestrator | Sunday 15 February 2026 04:02:08 +0000 (0:00:08.025) 0:02:16.561 *******
2026-02-15 04:02:16.027726 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:02:16.027734 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:02:16.027742 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:02:16.027750 | orchestrator |
2026-02-15 04:02:16.027758 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-15 04:02:16.027766 | orchestrator | Sunday 15 February 2026 04:02:15 +0000 (0:00:06.953) 0:02:23.515 *******
2026-02-15 04:02:16.027780 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 04:03:08.364627 | orchestrator |
2026-02-15 04:03:08.364858 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-02-15 04:03:08.364882 | orchestrator | Sunday 15 February 2026 04:02:16 +0000 (0:00:00.627) 0:02:24.142 *******
2026-02-15 04:03:08.364896 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:03:08.364909 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:03:08.364920 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:03:08.364932 | orchestrator |
2026-02-15 04:03:08.364943 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-02-15 04:03:08.364955 | orchestrator | Sunday 15 February 2026 04:02:17 +0000 (0:00:01.238) 0:02:25.380 *******
2026-02-15 04:03:08.364966 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:03:08.364978 | orchestrator |
2026-02-15 04:03:08.364989 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-02-15 04:03:08.365000 | orchestrator | Sunday 15 February 2026 04:02:18 +0000 (0:00:01.735) 0:02:27.116 *******
2026-02-15 04:03:08.365012 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-02-15 04:03:08.365023 | orchestrator |
2026-02-15 04:03:08.365034 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-02-15 04:03:08.365071 | orchestrator | Sunday 15 February 2026 04:02:30 +0000 (0:00:11.932) 0:02:39.048 *******
2026-02-15 04:03:08.365083 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-02-15 04:03:08.365095 | orchestrator |
2026-02-15 04:03:08.365106 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-02-15 04:03:08.365117 | orchestrator | Sunday 15 February 2026 04:02:55 +0000 (0:00:25.014) 0:03:04.062 *******
2026-02-15 04:03:08.365128 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-02-15 04:03:08.365140 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-02-15 04:03:08.365151 | orchestrator |
2026-02-15 04:03:08.365162 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-02-15 04:03:08.365188 | orchestrator | Sunday 15 February 2026 04:03:03 +0000 (0:00:07.167) 0:03:11.230 *******
2026-02-15 04:03:08.365199 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:03:08.365210 | orchestrator |
2026-02-15 04:03:08.365222 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-02-15 04:03:08.365232 | orchestrator | Sunday 15 February 2026 04:03:03 +0000 (0:00:00.138) 0:03:11.368 *******
2026-02-15 04:03:08.365244 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:03:08.365255 | orchestrator |
2026-02-15 04:03:08.365266 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-02-15 04:03:08.365277 | orchestrator | Sunday 15 February 2026 04:03:03 +0000 (0:00:00.148) 0:03:11.517 *******
2026-02-15 04:03:08.365288 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:03:08.365299 | orchestrator |
2026-02-15 04:03:08.365310 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-02-15 04:03:08.365321 | orchestrator | Sunday 15 February 2026 04:03:03 +0000 (0:00:00.150) 0:03:11.667 *******
2026-02-15 04:03:08.365332 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:03:08.365343 | orchestrator |
2026-02-15 04:03:08.365354 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-02-15 04:03:08.365365 | orchestrator | Sunday 15 February 2026 04:03:04 +0000 (0:00:00.584) 0:03:12.252 *******
2026-02-15 04:03:08.365376 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:03:08.365387 | orchestrator |
2026-02-15 04:03:08.365398 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-02-15 04:03:08.365409 | orchestrator | Sunday 15 February 2026 04:03:07 +0000 (0:00:03.274) 0:03:15.527 *******
2026-02-15 04:03:08.365420 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:03:08.365431 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:03:08.365442 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:03:08.365453 | orchestrator |
2026-02-15 04:03:08.365464 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 04:03:08.365477 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-15 04:03:08.365490 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-15 04:03:08.365502 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-15 04:03:08.365513 | orchestrator |
2026-02-15 04:03:08.365524 | orchestrator |
2026-02-15 04:03:08.365535 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 04:03:08.365547 | orchestrator | Sunday 15 February 2026 04:03:07 +0000 (0:00:00.495) 0:03:16.023 *******
2026-02-15 04:03:08.365557 | orchestrator | ===============================================================================
2026-02-15 04:03:08.365568 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 52.48s
2026-02-15 04:03:08.365580 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.01s
2026-02-15 04:03:08.365598 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.87s
2026-02-15 04:03:08.365609 | orchestrator | keystone : Creating admin project, user, role, service, and
endpoint --- 11.93s 2026-02-15 04:03:08.365620 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.39s 2026-02-15 04:03:08.365632 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.95s 2026-02-15 04:03:08.365643 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 8.03s 2026-02-15 04:03:08.365654 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.17s 2026-02-15 04:03:08.365665 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.95s 2026-02-15 04:03:08.365694 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.92s 2026-02-15 04:03:08.365734 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.81s 2026-02-15 04:03:08.365754 | orchestrator | keystone : Creating default user role ----------------------------------- 3.27s 2026-02-15 04:03:08.365774 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.17s 2026-02-15 04:03:08.365794 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.74s 2026-02-15 04:03:08.365813 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.60s 2026-02-15 04:03:08.365831 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.56s 2026-02-15 04:03:08.365843 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.50s 2026-02-15 04:03:08.365855 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.99s 2026-02-15 04:03:08.365866 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.77s 2026-02-15 04:03:08.365877 | orchestrator | keystone : Run key distribution ----------------------------------------- 
1.74s 2026-02-15 04:03:10.879252 | orchestrator | 2026-02-15 04:03:10 | INFO  | Task dc2dfc31-b4de-4867-82c9-02e1bb583b4c (placement) was prepared for execution. 2026-02-15 04:03:10.879384 | orchestrator | 2026-02-15 04:03:10 | INFO  | It takes a moment until task dc2dfc31-b4de-4867-82c9-02e1bb583b4c (placement) has been started and output is visible here. 2026-02-15 04:03:47.737156 | orchestrator | 2026-02-15 04:03:47.737285 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:03:47.737317 | orchestrator | 2026-02-15 04:03:47.737343 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:03:47.737380 | orchestrator | Sunday 15 February 2026 04:03:15 +0000 (0:00:00.283) 0:00:00.283 ******* 2026-02-15 04:03:47.737400 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:03:47.737419 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:03:47.737436 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:03:47.737455 | orchestrator | 2026-02-15 04:03:47.737474 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:03:47.737492 | orchestrator | Sunday 15 February 2026 04:03:15 +0000 (0:00:00.328) 0:00:00.612 ******* 2026-02-15 04:03:47.737512 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-02-15 04:03:47.737569 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-02-15 04:03:47.737586 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-02-15 04:03:47.737597 | orchestrator | 2026-02-15 04:03:47.737609 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-02-15 04:03:47.737619 | orchestrator | 2026-02-15 04:03:47.737630 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-15 04:03:47.737641 | orchestrator | Sunday 15 February 2026 
04:03:16 +0000 (0:00:00.490) 0:00:01.102 ******* 2026-02-15 04:03:47.737653 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:03:47.737665 | orchestrator | 2026-02-15 04:03:47.737676 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-02-15 04:03:47.737709 | orchestrator | Sunday 15 February 2026 04:03:16 +0000 (0:00:00.579) 0:00:01.681 ******* 2026-02-15 04:03:47.737758 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-02-15 04:03:47.737771 | orchestrator | 2026-02-15 04:03:47.737785 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-02-15 04:03:47.737798 | orchestrator | Sunday 15 February 2026 04:03:20 +0000 (0:00:03.732) 0:00:05.414 ******* 2026-02-15 04:03:47.737812 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-02-15 04:03:47.737824 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-02-15 04:03:47.737835 | orchestrator | 2026-02-15 04:03:47.737846 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-02-15 04:03:47.737857 | orchestrator | Sunday 15 February 2026 04:03:27 +0000 (0:00:06.825) 0:00:12.239 ******* 2026-02-15 04:03:47.737868 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-02-15 04:03:47.737879 | orchestrator | 2026-02-15 04:03:47.737890 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-02-15 04:03:47.737901 | orchestrator | Sunday 15 February 2026 04:03:31 +0000 (0:00:03.816) 0:00:16.055 ******* 2026-02-15 04:03:47.737912 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-15 04:03:47.737923 | orchestrator | changed: [testbed-node-0] => 
(item=placement -> service) 2026-02-15 04:03:47.737933 | orchestrator | 2026-02-15 04:03:47.737944 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-02-15 04:03:47.737955 | orchestrator | Sunday 15 February 2026 04:03:35 +0000 (0:00:04.265) 0:00:20.321 ******* 2026-02-15 04:03:47.737966 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-15 04:03:47.737977 | orchestrator | 2026-02-15 04:03:47.737988 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-02-15 04:03:47.737999 | orchestrator | Sunday 15 February 2026 04:03:38 +0000 (0:00:03.474) 0:00:23.795 ******* 2026-02-15 04:03:47.738010 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-02-15 04:03:47.738080 | orchestrator | 2026-02-15 04:03:47.738092 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-15 04:03:47.738103 | orchestrator | Sunday 15 February 2026 04:03:43 +0000 (0:00:04.360) 0:00:28.155 ******* 2026-02-15 04:03:47.738114 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:03:47.738125 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:03:47.738136 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:03:47.738147 | orchestrator | 2026-02-15 04:03:47.738158 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-02-15 04:03:47.738169 | orchestrator | Sunday 15 February 2026 04:03:43 +0000 (0:00:00.320) 0:00:28.476 ******* 2026-02-15 04:03:47.738184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:03:47.738231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:03:47.738254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:03:47.738266 | orchestrator | 2026-02-15 04:03:47.738277 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-02-15 04:03:47.738289 | orchestrator | Sunday 15 February 2026 04:03:44 +0000 (0:00:01.137) 0:00:29.613 ******* 2026-02-15 04:03:47.738300 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:03:47.738311 | orchestrator | 2026-02-15 04:03:47.738322 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-02-15 04:03:47.738333 | orchestrator | Sunday 15 February 2026 04:03:45 +0000 (0:00:00.382) 0:00:29.995 ******* 2026-02-15 04:03:47.738344 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:03:47.738355 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:03:47.738366 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:03:47.738377 | orchestrator | 2026-02-15 04:03:47.738388 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-02-15 04:03:47.738399 | orchestrator | Sunday 15 February 2026 04:03:45 +0000 (0:00:00.336) 0:00:30.332 ******* 2026-02-15 04:03:47.738411 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:03:47.738422 | orchestrator | 2026-02-15 04:03:47.738433 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-02-15 04:03:47.738444 | orchestrator | Sunday 15 February 2026 04:03:45 +0000 (0:00:00.595) 
0:00:30.927 ******* 2026-02-15 04:03:47.738456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:03:47.738481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:03:50.746344 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:03:50.746449 | orchestrator | 2026-02-15 04:03:50.746464 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-02-15 04:03:50.746476 | orchestrator | Sunday 15 February 2026 04:03:47 +0000 (0:00:01.738) 0:00:32.666 ******* 2026-02-15 04:03:50.746488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-15 04:03:50.746498 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:03:50.746511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-15 04:03:50.746522 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:03:50.746568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-15 04:03:50.746580 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:03:50.746591 | orchestrator | 2026-02-15 04:03:50.746602 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2026-02-15 04:03:50.746627 | orchestrator | Sunday 15 February 2026 04:03:48 +0000 (0:00:00.576) 0:00:33.242 ******* 2026-02-15 04:03:50.746638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-15 04:03:50.746648 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:03:50.746658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-15 04:03:50.746669 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:03:50.746679 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-15 04:03:50.746696 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:03:50.746706 | orchestrator | 2026-02-15 04:03:50.746883 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-02-15 04:03:50.746922 | orchestrator | Sunday 15 February 2026 04:03:49 +0000 (0:00:00.735) 0:00:33.978 ******* 2026-02-15 04:03:50.746944 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:03:50.746971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:03:58.170383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 
'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:03:58.170500 | orchestrator | 2026-02-15 04:03:58.170517 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-02-15 04:03:58.170531 | orchestrator | Sunday 15 February 2026 04:03:50 +0000 (0:00:01.705) 0:00:35.683 ******* 2026-02-15 04:03:58.170544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2026-02-15 04:03:58.170576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:03:58.170603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:03:58.170615 | orchestrator | 2026-02-15 04:03:58.170626 | orchestrator | 
TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-02-15 04:03:58.170638 | orchestrator | Sunday 15 February 2026 04:03:53 +0000 (0:00:02.442) 0:00:38.126 ******* 2026-02-15 04:03:58.170666 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-15 04:03:58.170679 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-15 04:03:58.170690 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-02-15 04:03:58.170701 | orchestrator | 2026-02-15 04:03:58.170712 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-02-15 04:03:58.170779 | orchestrator | Sunday 15 February 2026 04:03:54 +0000 (0:00:01.527) 0:00:39.654 ******* 2026-02-15 04:03:58.170791 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:03:58.170803 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:03:58.170814 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:03:58.170825 | orchestrator | 2026-02-15 04:03:58.170836 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-02-15 04:03:58.170846 | orchestrator | Sunday 15 February 2026 04:03:56 +0000 (0:00:01.409) 0:00:41.063 ******* 2026-02-15 04:03:58.170858 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-15 04:03:58.170881 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:03:58.170893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-15 04:03:58.170904 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:03:58.170924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-02-15 04:03:58.170938 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:03:58.170951 | orchestrator | 2026-02-15 04:03:58.170964 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-02-15 04:03:58.170976 | orchestrator | Sunday 15 February 2026 04:03:56 +0000 (0:00:00.794) 0:00:41.858 ******* 2026-02-15 04:03:58.171000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:04:28.620115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 
'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:04:28.620224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-02-15 04:04:28.620232 | orchestrator | 2026-02-15 04:04:28.620239 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-02-15 04:04:28.620246 | orchestrator | Sunday 15 February 2026 04:03:58 +0000 (0:00:01.251) 0:00:43.109 ******* 2026-02-15 04:04:28.620252 | orchestrator | changed: [testbed-node-0] 2026-02-15 
04:04:28.620259 | orchestrator | 2026-02-15 04:04:28.620275 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-02-15 04:04:28.620284 | orchestrator | Sunday 15 February 2026 04:04:00 +0000 (0:00:02.278) 0:00:45.387 ******* 2026-02-15 04:04:28.620304 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:04:28.620314 | orchestrator | 2026-02-15 04:04:28.620322 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-02-15 04:04:28.620330 | orchestrator | Sunday 15 February 2026 04:04:02 +0000 (0:00:02.396) 0:00:47.784 ******* 2026-02-15 04:04:28.620338 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:04:28.620346 | orchestrator | 2026-02-15 04:04:28.620354 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-15 04:04:28.620377 | orchestrator | Sunday 15 February 2026 04:04:17 +0000 (0:00:14.890) 0:01:02.675 ******* 2026-02-15 04:04:28.620385 | orchestrator | 2026-02-15 04:04:28.620393 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-15 04:04:28.620402 | orchestrator | Sunday 15 February 2026 04:04:17 +0000 (0:00:00.070) 0:01:02.745 ******* 2026-02-15 04:04:28.620411 | orchestrator | 2026-02-15 04:04:28.620419 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-02-15 04:04:28.620426 | orchestrator | Sunday 15 February 2026 04:04:17 +0000 (0:00:00.070) 0:01:02.816 ******* 2026-02-15 04:04:28.620434 | orchestrator | 2026-02-15 04:04:28.620443 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-02-15 04:04:28.620452 | orchestrator | Sunday 15 February 2026 04:04:17 +0000 (0:00:00.074) 0:01:02.890 ******* 2026-02-15 04:04:28.620460 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:04:28.620469 | orchestrator | changed: [testbed-node-2] 2026-02-15 
04:04:28.620477 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:04:28.620485 | orchestrator | 2026-02-15 04:04:28.620493 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:04:28.620503 | orchestrator | testbed-node-0 : ok=21  changed=16  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 04:04:28.620522 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-15 04:04:28.620583 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-15 04:04:28.620590 | orchestrator | 2026-02-15 04:04:28.620595 | orchestrator | 2026-02-15 04:04:28.620600 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:04:28.620605 | orchestrator | Sunday 15 February 2026 04:04:28 +0000 (0:00:10.226) 0:01:13.117 ******* 2026-02-15 04:04:28.620611 | orchestrator | =============================================================================== 2026-02-15 04:04:28.620616 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.89s 2026-02-15 04:04:28.620637 | orchestrator | placement : Restart placement-api container ---------------------------- 10.23s 2026-02-15 04:04:28.620642 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.83s 2026-02-15 04:04:28.620648 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.36s 2026-02-15 04:04:28.620653 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.27s 2026-02-15 04:04:28.620658 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.82s 2026-02-15 04:04:28.620663 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.73s 2026-02-15 04:04:28.620669 | orchestrator | 
service-ks-register : placement | Creating roles ------------------------ 3.47s 2026-02-15 04:04:28.620675 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.44s 2026-02-15 04:04:28.620681 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.40s 2026-02-15 04:04:28.620687 | orchestrator | placement : Creating placement databases -------------------------------- 2.28s 2026-02-15 04:04:28.620693 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.74s 2026-02-15 04:04:28.620699 | orchestrator | placement : Copying over config.json files for services ----------------- 1.71s 2026-02-15 04:04:28.620705 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.53s 2026-02-15 04:04:28.620710 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.41s 2026-02-15 04:04:28.620717 | orchestrator | placement : Check placement containers ---------------------------------- 1.25s 2026-02-15 04:04:28.620772 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.14s 2026-02-15 04:04:28.620784 | orchestrator | placement : Copying over existing policy file --------------------------- 0.79s 2026-02-15 04:04:28.620792 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.74s 2026-02-15 04:04:28.620802 | orchestrator | placement : include_tasks ----------------------------------------------- 0.60s 2026-02-15 04:04:31.139663 | orchestrator | 2026-02-15 04:04:31 | INFO  | Task 213673f2-221c-4186-bb8b-b3e8f5e927f7 (neutron) was prepared for execution. 2026-02-15 04:04:31.140146 | orchestrator | 2026-02-15 04:04:31 | INFO  | It takes a moment until task 213673f2-221c-4186-bb8b-b3e8f5e927f7 (neutron) has been started and output is visible here. 
2026-02-15 04:05:22.967993 | orchestrator | 2026-02-15 04:05:22.968094 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:05:22.968108 | orchestrator | 2026-02-15 04:05:22.968115 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:05:22.968122 | orchestrator | Sunday 15 February 2026 04:04:35 +0000 (0:00:00.321) 0:00:00.321 ******* 2026-02-15 04:05:22.968128 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:05:22.968136 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:05:22.968142 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:05:22.968148 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:05:22.968154 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:05:22.968161 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:05:22.968190 | orchestrator | 2026-02-15 04:05:22.968196 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:05:22.968203 | orchestrator | Sunday 15 February 2026 04:04:36 +0000 (0:00:00.792) 0:00:01.114 ******* 2026-02-15 04:05:22.968210 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-02-15 04:05:22.968217 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-02-15 04:05:22.968239 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-02-15 04:05:22.968246 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-02-15 04:05:22.968252 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-02-15 04:05:22.968259 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-02-15 04:05:22.968266 | orchestrator | 2026-02-15 04:05:22.968273 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-02-15 04:05:22.968279 | orchestrator | 2026-02-15 04:05:22.968285 | orchestrator | TASK [neutron : include_tasks] 
************************************************* 2026-02-15 04:05:22.968292 | orchestrator | Sunday 15 February 2026 04:04:37 +0000 (0:00:00.733) 0:00:01.847 ******* 2026-02-15 04:05:22.968300 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 04:05:22.968307 | orchestrator | 2026-02-15 04:05:22.968314 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-02-15 04:05:22.968320 | orchestrator | Sunday 15 February 2026 04:04:38 +0000 (0:00:01.342) 0:00:03.190 ******* 2026-02-15 04:05:22.968326 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:05:22.968333 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:05:22.968339 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:05:22.968345 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:05:22.968351 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:05:22.968357 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:05:22.968363 | orchestrator | 2026-02-15 04:05:22.968369 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-02-15 04:05:22.968376 | orchestrator | Sunday 15 February 2026 04:04:39 +0000 (0:00:01.371) 0:00:04.561 ******* 2026-02-15 04:05:22.968382 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:05:22.968388 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:05:22.968395 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:05:22.968402 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:05:22.968408 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:05:22.968414 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:05:22.968421 | orchestrator | 2026-02-15 04:05:22.968427 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-02-15 04:05:22.968433 | orchestrator | Sunday 15 February 2026 04:04:41 +0000 (0:00:01.138) 0:00:05.700 ******* 
2026-02-15 04:05:22.968440 | orchestrator | ok: [testbed-node-0] => { 2026-02-15 04:05:22.968448 | orchestrator |  "changed": false, 2026-02-15 04:05:22.968454 | orchestrator |  "msg": "All assertions passed" 2026-02-15 04:05:22.968460 | orchestrator | } 2026-02-15 04:05:22.968467 | orchestrator | ok: [testbed-node-1] => { 2026-02-15 04:05:22.968474 | orchestrator |  "changed": false, 2026-02-15 04:05:22.968480 | orchestrator |  "msg": "All assertions passed" 2026-02-15 04:05:22.968487 | orchestrator | } 2026-02-15 04:05:22.968493 | orchestrator | ok: [testbed-node-2] => { 2026-02-15 04:05:22.968500 | orchestrator |  "changed": false, 2026-02-15 04:05:22.968506 | orchestrator |  "msg": "All assertions passed" 2026-02-15 04:05:22.968513 | orchestrator | } 2026-02-15 04:05:22.968519 | orchestrator | ok: [testbed-node-3] => { 2026-02-15 04:05:22.968525 | orchestrator |  "changed": false, 2026-02-15 04:05:22.968533 | orchestrator |  "msg": "All assertions passed" 2026-02-15 04:05:22.968539 | orchestrator | } 2026-02-15 04:05:22.968545 | orchestrator | ok: [testbed-node-4] => { 2026-02-15 04:05:22.968555 | orchestrator |  "changed": false, 2026-02-15 04:05:22.968564 | orchestrator |  "msg": "All assertions passed" 2026-02-15 04:05:22.968573 | orchestrator | } 2026-02-15 04:05:22.968583 | orchestrator | ok: [testbed-node-5] => { 2026-02-15 04:05:22.968601 | orchestrator |  "changed": false, 2026-02-15 04:05:22.968612 | orchestrator |  "msg": "All assertions passed" 2026-02-15 04:05:22.968622 | orchestrator | } 2026-02-15 04:05:22.968633 | orchestrator | 2026-02-15 04:05:22.968643 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-02-15 04:05:22.968653 | orchestrator | Sunday 15 February 2026 04:04:41 +0000 (0:00:00.846) 0:00:06.546 ******* 2026-02-15 04:05:22.968664 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:05:22.968674 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:05:22.968684 | orchestrator 
| skipping: [testbed-node-2] 2026-02-15 04:05:22.968695 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:05:22.968706 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:05:22.968716 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:05:22.968726 | orchestrator | 2026-02-15 04:05:22.968754 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-02-15 04:05:22.968761 | orchestrator | Sunday 15 February 2026 04:04:42 +0000 (0:00:00.659) 0:00:07.206 ******* 2026-02-15 04:05:22.968767 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-02-15 04:05:22.968773 | orchestrator | 2026-02-15 04:05:22.968779 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-02-15 04:05:22.968785 | orchestrator | Sunday 15 February 2026 04:04:46 +0000 (0:00:04.339) 0:00:11.545 ******* 2026-02-15 04:05:22.968792 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-02-15 04:05:22.968799 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-02-15 04:05:22.968806 | orchestrator | 2026-02-15 04:05:22.968830 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-02-15 04:05:22.968837 | orchestrator | Sunday 15 February 2026 04:04:53 +0000 (0:00:07.010) 0:00:18.556 ******* 2026-02-15 04:05:22.968843 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-15 04:05:22.968849 | orchestrator | 2026-02-15 04:05:22.968856 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-02-15 04:05:22.968862 | orchestrator | Sunday 15 February 2026 04:04:57 +0000 (0:00:03.482) 0:00:22.038 ******* 2026-02-15 04:05:22.968868 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-15 04:05:22.968875 | orchestrator | changed: 
[testbed-node-0] => (item=neutron -> service) 2026-02-15 04:05:22.968881 | orchestrator | 2026-02-15 04:05:22.968887 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-02-15 04:05:22.968893 | orchestrator | Sunday 15 February 2026 04:05:01 +0000 (0:00:04.341) 0:00:26.380 ******* 2026-02-15 04:05:22.968900 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-15 04:05:22.968906 | orchestrator | 2026-02-15 04:05:22.968912 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-02-15 04:05:22.968926 | orchestrator | Sunday 15 February 2026 04:05:05 +0000 (0:00:03.445) 0:00:29.826 ******* 2026-02-15 04:05:22.968933 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-02-15 04:05:22.968939 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-02-15 04:05:22.968945 | orchestrator | 2026-02-15 04:05:22.968951 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-15 04:05:22.968957 | orchestrator | Sunday 15 February 2026 04:05:13 +0000 (0:00:08.446) 0:00:38.272 ******* 2026-02-15 04:05:22.968963 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:05:22.968969 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:05:22.968975 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:05:22.968981 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:05:22.968988 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:05:22.968994 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:05:22.969000 | orchestrator | 2026-02-15 04:05:22.969006 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-02-15 04:05:22.969019 | orchestrator | Sunday 15 February 2026 04:05:14 +0000 (0:00:00.873) 0:00:39.146 ******* 2026-02-15 04:05:22.969026 | orchestrator | skipping: [testbed-node-0] 2026-02-15 
04:05:22.969032 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:05:22.969038 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:05:22.969044 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:05:22.969049 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:05:22.969055 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:05:22.969062 | orchestrator | 2026-02-15 04:05:22.969068 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-02-15 04:05:22.969074 | orchestrator | Sunday 15 February 2026 04:05:16 +0000 (0:00:02.200) 0:00:41.347 ******* 2026-02-15 04:05:22.969080 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:05:22.969086 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:05:22.969092 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:05:22.969099 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:05:22.969105 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:05:22.969111 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:05:22.969117 | orchestrator | 2026-02-15 04:05:22.969124 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-15 04:05:22.969130 | orchestrator | Sunday 15 February 2026 04:05:18 +0000 (0:00:01.240) 0:00:42.587 ******* 2026-02-15 04:05:22.969135 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:05:22.969142 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:05:22.969148 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:05:22.969154 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:05:22.969160 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:05:22.969166 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:05:22.969172 | orchestrator | 2026-02-15 04:05:22.969178 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-02-15 04:05:22.969184 | orchestrator | Sunday 15 February 2026 04:05:20 +0000 (0:00:02.245) 
0:00:44.833 ******* 2026-02-15 04:05:22.969194 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:22.969212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:28.826244 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:28.826375 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:05:28.826389 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:05:28.826396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:05:28.826404 | orchestrator | 2026-02-15 04:05:28.826412 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-02-15 04:05:28.826421 | orchestrator | Sunday 15 February 2026 04:05:22 +0000 (0:00:02.700) 0:00:47.533 ******* 2026-02-15 04:05:28.826428 | orchestrator | [WARNING]: Skipped 2026-02-15 04:05:28.826437 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-02-15 04:05:28.826445 | orchestrator | due to this access issue: 2026-02-15 04:05:28.826453 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-02-15 04:05:28.826460 | orchestrator | a directory 2026-02-15 04:05:28.826467 
| orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 04:05:28.826475 | orchestrator | 2026-02-15 04:05:28.826482 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-15 04:05:28.826489 | orchestrator | Sunday 15 February 2026 04:05:23 +0000 (0:00:00.870) 0:00:48.403 ******* 2026-02-15 04:05:28.826504 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 04:05:28.826512 | orchestrator | 2026-02-15 04:05:28.826519 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2026-02-15 04:05:28.826544 | orchestrator | Sunday 15 February 2026 04:05:25 +0000 (0:00:01.443) 0:00:49.847 ******* 2026-02-15 04:05:28.826559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:28.826568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:28.826575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:28.826582 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:05:28.826597 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:05:33.992334 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:05:33.992438 | orchestrator | 2026-02-15 04:05:33.992454 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-02-15 04:05:33.992467 | orchestrator | Sunday 15 February 2026 04:05:28 +0000 (0:00:03.542) 0:00:53.389 ******* 2026-02-15 04:05:33.992481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 04:05:33.992495 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:05:33.992508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 04:05:33.992519 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:05:33.992531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 04:05:33.992562 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:05:33.992613 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:05:33.992627 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:05:33.992638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:05:33.992649 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:05:33.992660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:05:33.992671 | orchestrator | skipping: [testbed-node-4] 
2026-02-15 04:05:33.992682 | orchestrator | 2026-02-15 04:05:33.992693 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-02-15 04:05:33.992704 | orchestrator | Sunday 15 February 2026 04:05:30 +0000 (0:00:02.053) 0:00:55.443 ******* 2026-02-15 04:05:33.992715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 04:05:33.992769 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:05:33.992788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 04:05:40.437811 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:05:40.437927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 04:05:40.437945 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:05:40.437957 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:05:40.437968 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:05:40.437979 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:05:40.438007 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:05:40.438072 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:05:40.438085 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:05:40.438094 | orchestrator | 2026-02-15 
04:05:40.438106 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-02-15 04:05:40.438119 | orchestrator | Sunday 15 February 2026 04:05:33 +0000 (0:00:03.116) 0:00:58.560 ******* 2026-02-15 04:05:40.438129 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:05:40.438139 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:05:40.438149 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:05:40.438159 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:05:40.438170 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:05:40.438180 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:05:40.438191 | orchestrator | 2026-02-15 04:05:40.438203 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-02-15 04:05:40.438213 | orchestrator | Sunday 15 February 2026 04:05:36 +0000 (0:00:02.646) 0:01:01.207 ******* 2026-02-15 04:05:40.438224 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:05:40.438236 | orchestrator | 2026-02-15 04:05:40.438247 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-02-15 04:05:40.438271 | orchestrator | Sunday 15 February 2026 04:05:36 +0000 (0:00:00.149) 0:01:01.357 ******* 2026-02-15 04:05:40.438278 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:05:40.438284 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:05:40.438292 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:05:40.438305 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:05:40.438313 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:05:40.438320 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:05:40.438327 | orchestrator | 2026-02-15 04:05:40.438335 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-02-15 04:05:40.438343 | orchestrator | Sunday 15 February 2026 04:05:37 +0000 (0:00:00.661) 
0:01:02.018 ******* 2026-02-15 04:05:40.438351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 04:05:40.438360 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:05:40.438367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 
04:05:40.438382 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:05:40.438390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:05:40.438398 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:05:40.438406 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:05:40.438414 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:05:40.438431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 04:05:49.336067 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:05:49.336160 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:05:49.336189 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:05:49.336197 | orchestrator | 2026-02-15 04:05:49.336207 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-02-15 04:05:49.336219 | orchestrator | Sunday 15 February 2026 04:05:40 +0000 (0:00:02.979) 0:01:04.998 
******* 2026-02-15 04:05:49.336230 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:49.336243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:49.336268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:49.336297 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:05:49.336308 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:05:49.336327 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:05:49.336338 | orchestrator | 2026-02-15 04:05:49.336350 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-02-15 04:05:49.336360 | orchestrator | Sunday 15 February 2026 04:05:43 +0000 (0:00:03.250) 0:01:08.248 ******* 2026-02-15 04:05:49.336369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:49.336380 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:49.336393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:05:54.927617 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:05:54.927725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 
04:05:54.927801 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:05:54.927822 | orchestrator |
2026-02-15 04:05:54.927845 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ******************************
2026-02-15 04:05:54.927867 | orchestrator | Sunday 15 February 2026 04:05:49 +0000 (0:00:05.657) 0:01:13.906 *******
2026-02-15 04:05:54.927908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:05:54.927931 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:05:54.927987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:05:54.928015 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:05:54.928027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:05:54.928038 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:05:54.928050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:05:54.928061 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:05:54.928073 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:05:54.928084 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:05:54.928104 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:05:54.928135 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:05:54.928147 | orchestrator |
2026-02-15 04:05:54.928158 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2026-02-15 04:05:54.928170 | orchestrator | Sunday 15 February 2026 04:05:51 +0000 (0:00:02.662) 0:01:16.568 *******
2026-02-15 04:05:54.928181 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:05:54.928192 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:05:54.928211 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:05:54.928224 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:05:54.928236 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:05:54.928247 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:05:54.928261 | orchestrator |
2026-02-15 04:05:54.928278 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2026-02-15 04:05:54.928298 | orchestrator | Sunday 15 February 2026 04:05:54 +0000 (0:00:02.920) 0:01:19.489 *******
2026-02-15 04:06:15.822493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:06:15.822589 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:15.822600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:06:15.822608 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:15.822615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:06:15.822622 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:15.822643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:06:15.822681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:06:15.822689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:06:15.822696 | orchestrator |
2026-02-15 04:06:15.822705 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2026-02-15 04:06:15.822713 | orchestrator | Sunday 15 February 2026 04:05:58 +0000 (0:00:03.793) 0:01:23.282 *******
2026-02-15 04:06:15.822719 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:15.822726 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:15.822784 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:15.822792 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:15.822799 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:15.822806 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:15.822812 | orchestrator |
2026-02-15 04:06:15.822820 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2026-02-15 04:06:15.822827 | orchestrator | Sunday 15 February 2026 04:06:01 +0000 (0:00:02.370) 0:01:25.653 *******
2026-02-15 04:06:15.822833 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:15.822840 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:15.822847 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:15.822853 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:15.822860 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:15.822867 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:15.822874 | orchestrator |
2026-02-15 04:06:15.822881 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2026-02-15 04:06:15.822894 | orchestrator | Sunday 15 February 2026 04:06:03 +0000 (0:00:02.348) 0:01:28.001 *******
2026-02-15 04:06:15.822901 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:15.822908 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:15.822915 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:15.822921 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:15.822928 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:15.822934 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:15.822941 | orchestrator |
2026-02-15 04:06:15.822948 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2026-02-15 04:06:15.822954 | orchestrator | Sunday 15 February 2026 04:06:06 +0000 (0:00:02.728) 0:01:30.729 *******
2026-02-15 04:06:15.822961 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:15.822968 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:15.822974 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:15.822981 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:15.822987 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:15.822994 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:15.823001 | orchestrator |
2026-02-15 04:06:15.823007 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2026-02-15 04:06:15.823014 | orchestrator | Sunday 15 February 2026 04:06:08 +0000 (0:00:02.338) 0:01:33.068 *******
2026-02-15 04:06:15.823025 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:15.823032 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:15.823038 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:15.823045 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:15.823051 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:15.823058 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:15.823065 | orchestrator |
2026-02-15 04:06:15.823072 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2026-02-15 04:06:15.823078 | orchestrator | Sunday 15 February 2026 04:06:10 +0000 (0:00:02.494) 0:01:35.563 *******
2026-02-15 04:06:15.823085 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:15.823091 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:15.823098 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:15.823105 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:15.823111 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:15.823118 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:15.823124 | orchestrator |
2026-02-15 04:06:15.823131 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2026-02-15 04:06:15.823138 | orchestrator | Sunday 15 February 2026 04:06:13 +0000 (0:00:02.358) 0:01:37.922 *******
2026-02-15 04:06:15.823145 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-15 04:06:15.823151 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:15.823158 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-15 04:06:15.823165 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:15.823171 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-15 04:06:15.823178 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-15 04:06:15.823185 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:15.823192 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:15.823203 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-15 04:06:20.280476 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:20.280607 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2026-02-15 04:06:20.280625 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:20.280636 | orchestrator |
2026-02-15 04:06:20.280648 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2026-02-15 04:06:20.280693 | orchestrator | Sunday 15 February 2026 04:06:15 +0000 (0:00:02.457) 0:01:40.379 *******
2026-02-15 04:06:20.280708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:06:20.280722 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:20.280853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:06:20.280866 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:20.280898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:06:20.280908 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:20.280920 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:06:20.280931 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:20.280970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:06:20.280992 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:20.281000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:06:20.281008 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:20.281016 | orchestrator |
2026-02-15 04:06:20.281024 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-02-15 04:06:20.281031 | orchestrator | Sunday 15 February 2026 04:06:17 +0000 (0:00:02.121) 0:01:42.501 *******
2026-02-15 04:06:20.281039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:06:20.281046 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:20.281059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:06:20.281067 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:20.281083 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-02-15 04:06:48.167512 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:48.167674 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:06:48.167708 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:48.167726 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:06:48.167815 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:48.167859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-02-15 04:06:48.167881 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:48.167903 | orchestrator |
2026-02-15 04:06:48.167925 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-02-15 04:06:48.167946 | orchestrator | Sunday 15 February 2026 04:06:20 +0000 (0:00:02.347) 0:01:44.848 *******
2026-02-15 04:06:48.167966 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:48.167986 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:48.168006 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:48.168027 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:48.168048 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:48.168066 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:48.168085 | orchestrator |
2026-02-15 04:06:48.168103 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-02-15 04:06:48.168155 | orchestrator | Sunday 15 February 2026 04:06:22 +0000 (0:00:02.202) 0:01:47.051 *******
2026-02-15 04:06:48.168178 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:48.168197 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:48.168217 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:48.168235 | orchestrator | changed: [testbed-node-4]
2026-02-15 04:06:48.168253 | orchestrator | changed: [testbed-node-3]
2026-02-15 04:06:48.168271 | orchestrator | changed: [testbed-node-5]
2026-02-15 04:06:48.168289 | orchestrator |
2026-02-15 04:06:48.168309 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-02-15 04:06:48.168328 | orchestrator | Sunday 15 February 2026 04:06:26 +0000 (0:00:04.129) 0:01:51.180 *******
2026-02-15 04:06:48.168347 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:48.168365 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:48.168382 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:48.168400 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:48.168418 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:48.168436 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:48.168455 | orchestrator |
2026-02-15 04:06:48.168473 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-02-15 04:06:48.168491 | orchestrator | Sunday 15 February 2026 04:06:29 +0000 (0:00:02.444) 0:01:53.625 *******
2026-02-15 04:06:48.168510 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:48.168529 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:48.168568 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:48.168602 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:48.168621 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:48.168639 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:48.168658 | orchestrator |
2026-02-15 04:06:48.168677 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-02-15 04:06:48.168721 | orchestrator | Sunday 15 February 2026 04:06:31 +0000 (0:00:02.285) 0:01:55.910 *******
2026-02-15 04:06:48.168813 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:48.168833 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:48.168851 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:48.168870 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:48.168889 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:48.168907 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:48.168925 | orchestrator |
2026-02-15 04:06:48.168944 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-02-15 04:06:48.168962 | orchestrator | Sunday 15 February 2026 04:06:33 +0000 (0:00:02.290) 0:01:58.201 *******
2026-02-15 04:06:48.168981 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:48.169000 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:48.169018 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:48.169036 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:48.169053 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:48.169072 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:48.169091 | orchestrator |
2026-02-15 04:06:48.169109 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-02-15 04:06:48.169127 | orchestrator | Sunday 15 February 2026 04:06:35 +0000 (0:00:02.274) 0:02:00.476 *******
2026-02-15 04:06:48.169146 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:48.169165 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:48.169184 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:48.169202 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:48.169221 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:48.169239 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:48.169257 | orchestrator |
2026-02-15 04:06:48.169276 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-02-15 04:06:48.169294 | orchestrator | Sunday 15 February 2026 04:06:38 +0000 (0:00:02.666) 0:02:03.142 *******
2026-02-15 04:06:48.169313 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:48.169350 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:48.169369 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:48.169387 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:48.169406 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:48.169424 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:48.169443 | orchestrator |
2026-02-15 04:06:48.169462 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-02-15 04:06:48.169480 | orchestrator | Sunday 15 February 2026 04:06:40 +0000 (0:00:02.301) 0:02:05.443 *******
2026-02-15 04:06:48.169499 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:48.169517 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:48.169536 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:48.169558 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:48.169576 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:48.169594 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:48.169613 | orchestrator |
2026-02-15 04:06:48.169631 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-02-15 04:06:48.169650 | orchestrator | Sunday 15 February 2026 04:06:43 +0000 (0:00:02.609) 0:02:08.053 *******
2026-02-15 04:06:48.169669 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-15 04:06:48.169689 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:06:48.169708 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-15 04:06:48.169728 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:06:48.169792 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-15 04:06:48.169812 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:06:48.169830 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-15 04:06:48.169849 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:06:48.169867 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-15 04:06:48.169886 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:06:48.169905 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-02-15 04:06:48.169923 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:06:48.169942 | orchestrator |
2026-02-15 04:06:48.169961 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-02-15 04:06:48.169979 | orchestrator | Sunday 15 February 2026 04:06:45 +0000 (0:00:02.099) 0:02:10.153 *******
2026-02-15 04:06:48.170000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True,
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 04:06:48.170102 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:06:48.170147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 04:06:50.986204 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:06:50.986332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-02-15 04:06:50.986358 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:06:50.986399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:06:50.986418 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:06:50.986436 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:06:50.986453 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:06:50.986463 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 04:06:50.986498 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:06:50.986509 | orchestrator | 2026-02-15 04:06:50.986521 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-02-15 04:06:50.986532 | orchestrator | Sunday 15 February 2026 04:06:48 +0000 (0:00:02.579) 0:02:12.733 ******* 2026-02-15 04:06:50.986562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 
'listen_port': '9696'}}}}) 2026-02-15 04:06:50.986577 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:06:50.986595 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:06:50.986609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-02-15 04:06:50.986621 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:06:50.986647 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-02-15 04:09:15.036847 | orchestrator | 2026-02-15 04:09:15.036971 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-02-15 04:09:15.036991 | orchestrator | Sunday 15 February 2026 04:06:50 +0000 (0:00:02.824) 0:02:15.557 ******* 2026-02-15 04:09:15.037005 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:09:15.037015 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:09:15.037022 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:09:15.037029 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:09:15.037036 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:09:15.037043 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:09:15.037050 | orchestrator | 2026-02-15 04:09:15.037057 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-02-15 04:09:15.037064 | orchestrator | Sunday 15 February 2026 04:06:51 +0000 (0:00:00.859) 0:02:16.417 ******* 2026-02-15 04:09:15.037072 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:09:15.037078 | orchestrator | 2026-02-15 04:09:15.037086 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-02-15 04:09:15.037093 | orchestrator | Sunday 15 February 2026 04:06:53 +0000 (0:00:02.150) 0:02:18.567 ******* 2026-02-15 04:09:15.037100 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:09:15.037107 | orchestrator | 2026-02-15 04:09:15.037114 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-02-15 04:09:15.037121 | orchestrator | Sunday 15 
February 2026 04:06:56 +0000 (0:00:02.434) 0:02:21.002 ******* 2026-02-15 04:09:15.037128 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:09:15.037135 | orchestrator | 2026-02-15 04:09:15.037142 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-15 04:09:15.037163 | orchestrator | Sunday 15 February 2026 04:07:39 +0000 (0:00:43.519) 0:03:04.522 ******* 2026-02-15 04:09:15.037183 | orchestrator | 2026-02-15 04:09:15.037207 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-15 04:09:15.037223 | orchestrator | Sunday 15 February 2026 04:07:40 +0000 (0:00:00.072) 0:03:04.594 ******* 2026-02-15 04:09:15.037234 | orchestrator | 2026-02-15 04:09:15.037245 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-15 04:09:15.037255 | orchestrator | Sunday 15 February 2026 04:07:40 +0000 (0:00:00.073) 0:03:04.668 ******* 2026-02-15 04:09:15.037265 | orchestrator | 2026-02-15 04:09:15.037277 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-15 04:09:15.037289 | orchestrator | Sunday 15 February 2026 04:07:40 +0000 (0:00:00.072) 0:03:04.741 ******* 2026-02-15 04:09:15.037322 | orchestrator | 2026-02-15 04:09:15.037329 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-15 04:09:15.037336 | orchestrator | Sunday 15 February 2026 04:07:40 +0000 (0:00:00.074) 0:03:04.816 ******* 2026-02-15 04:09:15.037343 | orchestrator | 2026-02-15 04:09:15.037350 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-02-15 04:09:15.037356 | orchestrator | Sunday 15 February 2026 04:07:40 +0000 (0:00:00.073) 0:03:04.889 ******* 2026-02-15 04:09:15.037363 | orchestrator | 2026-02-15 04:09:15.037372 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] 
******************* 2026-02-15 04:09:15.037380 | orchestrator | Sunday 15 February 2026 04:07:40 +0000 (0:00:00.079) 0:03:04.969 ******* 2026-02-15 04:09:15.037389 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:09:15.037397 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:09:15.037405 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:09:15.037413 | orchestrator | 2026-02-15 04:09:15.037421 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-02-15 04:09:15.037429 | orchestrator | Sunday 15 February 2026 04:08:10 +0000 (0:00:30.429) 0:03:35.398 ******* 2026-02-15 04:09:15.037437 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:09:15.037444 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:09:15.037452 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:09:15.037461 | orchestrator | 2026-02-15 04:09:15.037469 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:09:15.037479 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-15 04:09:15.037489 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-15 04:09:15.037497 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-02-15 04:09:15.037505 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-15 04:09:15.037513 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-15 04:09:15.037520 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-02-15 04:09:15.037529 | orchestrator | 2026-02-15 04:09:15.037537 | orchestrator | 2026-02-15 04:09:15.037545 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-15 04:09:15.037553 | orchestrator | Sunday 15 February 2026 04:09:14 +0000 (0:01:03.669) 0:04:39.068 ******* 2026-02-15 04:09:15.037561 | orchestrator | =============================================================================== 2026-02-15 04:09:15.037569 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 63.67s 2026-02-15 04:09:15.037577 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 43.52s 2026-02-15 04:09:15.037585 | orchestrator | neutron : Restart neutron-server container ----------------------------- 30.43s 2026-02-15 04:09:15.037609 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.45s 2026-02-15 04:09:15.037618 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.01s 2026-02-15 04:09:15.037625 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 5.66s 2026-02-15 04:09:15.037633 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.34s 2026-02-15 04:09:15.037642 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 4.34s 2026-02-15 04:09:15.037649 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.13s 2026-02-15 04:09:15.037658 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.79s 2026-02-15 04:09:15.037671 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 3.54s 2026-02-15 04:09:15.037679 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.48s 2026-02-15 04:09:15.037687 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.45s 2026-02-15 04:09:15.037696 | orchestrator | neutron : Copying over 
config.json files for services ------------------- 3.25s 2026-02-15 04:09:15.037704 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.12s 2026-02-15 04:09:15.037712 | orchestrator | neutron : Copying over existing policy file ----------------------------- 2.98s 2026-02-15 04:09:15.037720 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 2.92s 2026-02-15 04:09:15.037732 | orchestrator | neutron : Check neutron containers -------------------------------------- 2.82s 2026-02-15 04:09:15.037739 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 2.73s 2026-02-15 04:09:15.037746 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 2.70s 2026-02-15 04:09:17.659571 | orchestrator | 2026-02-15 04:09:17 | INFO  | Task b39148f9-fc56-48aa-bfe4-93a00a5231db (nova) was prepared for execution. 2026-02-15 04:09:17.659666 | orchestrator | 2026-02-15 04:09:17 | INFO  | It takes a moment until task b39148f9-fc56-48aa-bfe4-93a00a5231db (nova) has been started and output is visible here. 
2026-02-15 04:11:23.801730 | orchestrator | 2026-02-15 04:11:23.801994 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:11:23.802116 | orchestrator | 2026-02-15 04:11:23.802133 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-02-15 04:11:23.802146 | orchestrator | Sunday 15 February 2026 04:09:22 +0000 (0:00:00.304) 0:00:00.304 ******* 2026-02-15 04:11:23.802158 | orchestrator | changed: [testbed-manager] 2026-02-15 04:11:23.802170 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:11:23.802182 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:11:23.802193 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:11:23.802204 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:11:23.802215 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:11:23.802226 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:11:23.802237 | orchestrator | 2026-02-15 04:11:23.802248 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:11:23.802259 | orchestrator | Sunday 15 February 2026 04:09:23 +0000 (0:00:00.945) 0:00:01.250 ******* 2026-02-15 04:11:23.802271 | orchestrator | changed: [testbed-manager] 2026-02-15 04:11:23.802281 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:11:23.802293 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:11:23.802304 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:11:23.802315 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:11:23.802325 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:11:23.802336 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:11:23.802347 | orchestrator | 2026-02-15 04:11:23.802358 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:11:23.802369 | orchestrator | Sunday 15 February 2026 04:09:24 +0000 (0:00:00.909) 
0:00:02.159 ******* 2026-02-15 04:11:23.802381 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-02-15 04:11:23.802392 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2026-02-15 04:11:23.802403 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-02-15 04:11:23.802414 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-02-15 04:11:23.802425 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-02-15 04:11:23.802436 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-02-15 04:11:23.802447 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-02-15 04:11:23.802458 | orchestrator | 2026-02-15 04:11:23.802469 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-02-15 04:11:23.802506 | orchestrator | 2026-02-15 04:11:23.802518 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-02-15 04:11:23.802529 | orchestrator | Sunday 15 February 2026 04:09:25 +0000 (0:00:00.778) 0:00:02.938 ******* 2026-02-15 04:11:23.802540 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:11:23.802551 | orchestrator | 2026-02-15 04:11:23.802562 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-02-15 04:11:23.802573 | orchestrator | Sunday 15 February 2026 04:09:25 +0000 (0:00:00.825) 0:00:03.764 ******* 2026-02-15 04:11:23.802584 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-02-15 04:11:23.802596 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-02-15 04:11:23.802607 | orchestrator | 2026-02-15 04:11:23.802618 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-02-15 04:11:23.802629 | orchestrator | Sunday 15 February 2026 04:09:30 +0000 (0:00:04.396) 
0:00:08.160 ******* 2026-02-15 04:11:23.802639 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-15 04:11:23.802651 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-02-15 04:11:23.802662 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:11:23.802673 | orchestrator | 2026-02-15 04:11:23.802684 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-02-15 04:11:23.802695 | orchestrator | Sunday 15 February 2026 04:09:34 +0000 (0:00:04.422) 0:00:12.582 ******* 2026-02-15 04:11:23.802706 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:11:23.802717 | orchestrator | 2026-02-15 04:11:23.802728 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-02-15 04:11:23.802739 | orchestrator | Sunday 15 February 2026 04:09:35 +0000 (0:00:00.669) 0:00:13.252 ******* 2026-02-15 04:11:23.802749 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:11:23.802760 | orchestrator | 2026-02-15 04:11:23.802771 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-02-15 04:11:23.802782 | orchestrator | Sunday 15 February 2026 04:09:36 +0000 (0:00:01.351) 0:00:14.603 ******* 2026-02-15 04:11:23.802793 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:11:23.802804 | orchestrator | 2026-02-15 04:11:23.802815 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-15 04:11:23.802826 | orchestrator | Sunday 15 February 2026 04:09:39 +0000 (0:00:02.806) 0:00:17.410 ******* 2026-02-15 04:11:23.802872 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:11:23.802892 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:11:23.802910 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:11:23.802928 | orchestrator | 2026-02-15 04:11:23.802947 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 
2026-02-15 04:11:23.802966 | orchestrator | Sunday 15 February 2026 04:09:39 +0000 (0:00:00.331) 0:00:17.741 *******
2026-02-15 04:11:23.802986 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:11:23.803006 | orchestrator |
2026-02-15 04:11:23.803043 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-02-15 04:11:23.803062 | orchestrator | Sunday 15 February 2026 04:10:14 +0000 (0:00:34.246) 0:00:51.988 *******
2026-02-15 04:11:23.803081 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:11:23.803102 | orchestrator |
2026-02-15 04:11:23.803121 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-15 04:11:23.803140 | orchestrator | Sunday 15 February 2026 04:10:29 +0000 (0:00:15.636) 0:01:07.624 *******
2026-02-15 04:11:23.803158 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:11:23.803174 | orchestrator |
2026-02-15 04:11:23.803193 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-15 04:11:23.803213 | orchestrator | Sunday 15 February 2026 04:10:42 +0000 (0:00:12.851) 0:01:20.475 *******
2026-02-15 04:11:23.803257 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:11:23.803270 | orchestrator |
2026-02-15 04:11:23.803280 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-02-15 04:11:23.803291 | orchestrator | Sunday 15 February 2026 04:10:43 +0000 (0:00:00.726) 0:01:21.202 *******
2026-02-15 04:11:23.803314 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:11:23.803325 | orchestrator |
2026-02-15 04:11:23.803336 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-15 04:11:23.803347 | orchestrator | Sunday 15 February 2026 04:10:43 +0000 (0:00:00.491) 0:01:21.693 *******
2026-02-15 04:11:23.803358 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 04:11:23.803369 | orchestrator |
2026-02-15 04:11:23.803380 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-02-15 04:11:23.803391 | orchestrator | Sunday 15 February 2026 04:10:44 +0000 (0:00:00.783) 0:01:22.476 *******
2026-02-15 04:11:23.803401 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:11:23.803412 | orchestrator |
2026-02-15 04:11:23.803423 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-15 04:11:23.803434 | orchestrator | Sunday 15 February 2026 04:11:03 +0000 (0:00:18.574) 0:01:41.050 *******
2026-02-15 04:11:23.803445 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:11:23.803456 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:11:23.803467 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:11:23.803477 | orchestrator |
2026-02-15 04:11:23.803488 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-02-15 04:11:23.803499 | orchestrator |
2026-02-15 04:11:23.803510 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-02-15 04:11:23.803521 | orchestrator | Sunday 15 February 2026 04:11:03 +0000 (0:00:00.347) 0:01:41.397 *******
2026-02-15 04:11:23.803532 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 04:11:23.803542 | orchestrator |
2026-02-15 04:11:23.803553 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2026-02-15 04:11:23.803564 | orchestrator | Sunday 15 February 2026 04:11:04 +0000 (0:00:00.824) 0:01:42.222 *******
2026-02-15 04:11:23.803575 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:11:23.803585 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:11:23.803596 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:11:23.803607 | orchestrator |
2026-02-15 04:11:23.803618 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-02-15 04:11:23.803629 | orchestrator | Sunday 15 February 2026 04:11:06 +0000 (0:00:02.308) 0:01:44.530 *******
2026-02-15 04:11:23.803639 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:11:23.803650 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:11:23.803661 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:11:23.803671 | orchestrator |
2026-02-15 04:11:23.803682 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-15 04:11:23.803693 | orchestrator | Sunday 15 February 2026 04:11:08 +0000 (0:00:02.352) 0:01:46.883 *******
2026-02-15 04:11:23.803704 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:11:23.803715 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:11:23.803725 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:11:23.803736 | orchestrator |
2026-02-15 04:11:23.803747 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-15 04:11:23.803757 | orchestrator | Sunday 15 February 2026 04:11:09 +0000 (0:00:00.625) 0:01:47.509 *******
2026-02-15 04:11:23.803768 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-15 04:11:23.803779 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:11:23.803789 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-15 04:11:23.803800 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:11:23.803811 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-15 04:11:23.803822 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-02-15 04:11:23.803857 | orchestrator |
2026-02-15 04:11:23.803869 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-02-15 04:11:23.803880 | orchestrator | Sunday 15 February 2026 04:11:18 +0000 (0:00:08.504) 0:01:56.014 *******
2026-02-15 04:11:23.803899 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:11:23.803910 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:11:23.803921 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:11:23.803932 | orchestrator |
2026-02-15 04:11:23.803943 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-02-15 04:11:23.803954 | orchestrator | Sunday 15 February 2026 04:11:18 +0000 (0:00:00.391) 0:01:56.406 *******
2026-02-15 04:11:23.803964 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-15 04:11:23.803975 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:11:23.803986 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-02-15 04:11:23.803996 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:11:23.804007 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-15 04:11:23.804018 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:11:23.804028 | orchestrator |
2026-02-15 04:11:23.804039 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-02-15 04:11:23.804050 | orchestrator | Sunday 15 February 2026 04:11:19 +0000 (0:00:00.504) 0:01:57.562 *******
2026-02-15 04:11:23.804061 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:11:23.804072 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:11:23.804082 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:11:23.804093 | orchestrator |
2026-02-15 04:11:23.804104 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-02-15 04:11:23.804115 | orchestrator | Sunday 15 February 2026 04:11:20 +0000 (0:00:01.038) 0:01:58.067 *******
2026-02-15 04:11:23.804126 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:11:23.804137 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:11:23.804147 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:11:23.804158 | orchestrator |
2026-02-15 04:11:23.804169 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-02-15 04:11:23.804180 | orchestrator | Sunday 15 February 2026 04:11:21 +0000 (0:00:01.038) 0:01:59.106 *******
2026-02-15 04:11:23.804191 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:11:23.804202 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:11:23.804220 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:12:45.313606 | orchestrator |
2026-02-15 04:12:45.313707 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-02-15 04:12:45.313720 | orchestrator | Sunday 15 February 2026 04:11:23 +0000 (0:00:02.603) 0:02:01.710 *******
2026-02-15 04:12:45.313729 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:12:45.313740 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:12:45.313748 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:12:45.313757 | orchestrator |
2026-02-15 04:12:45.313766 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-02-15 04:12:45.313774 | orchestrator | Sunday 15 February 2026 04:11:46 +0000 (0:00:22.320) 0:02:24.030 *******
2026-02-15 04:12:45.313781 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:12:45.313789 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:12:45.313796 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:12:45.313804 | orchestrator |
2026-02-15 04:12:45.313812 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-02-15 04:12:45.313820 | orchestrator | Sunday 15 February 2026 04:11:58 +0000 (0:00:12.464) 0:02:36.494 *******
2026-02-15 04:12:45.313827 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:12:45.313835 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:12:45.313843 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:12:45.313851 | orchestrator |
2026-02-15 04:12:45.313859 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-02-15 04:12:45.313910 | orchestrator | Sunday 15 February 2026 04:11:59 +0000 (0:00:01.195) 0:02:37.690 *******
2026-02-15 04:12:45.313918 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:12:45.313925 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:12:45.313933 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:12:45.313963 | orchestrator |
2026-02-15 04:12:45.313971 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-02-15 04:12:45.313978 | orchestrator | Sunday 15 February 2026 04:12:12 +0000 (0:00:13.216) 0:02:50.906 *******
2026-02-15 04:12:45.313985 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:12:45.313992 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:12:45.313999 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:12:45.314006 | orchestrator |
2026-02-15 04:12:45.314061 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-02-15 04:12:45.314071 | orchestrator | Sunday 15 February 2026 04:12:14 +0000 (0:00:01.153) 0:02:52.059 *******
2026-02-15 04:12:45.314079 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:12:45.314085 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:12:45.314093 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:12:45.314100 | orchestrator |
2026-02-15 04:12:45.314108 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-02-15 04:12:45.314115 | orchestrator |
2026-02-15 04:12:45.314123 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-02-15 04:12:45.314131 | orchestrator | Sunday 15 February 2026 04:12:14 +0000 (0:00:00.354) 0:02:52.414 *******
2026-02-15 04:12:45.314138 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 04:12:45.314146 | orchestrator |
2026-02-15 04:12:45.314154 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-02-15 04:12:45.314162 | orchestrator | Sunday 15 February 2026 04:12:15 +0000 (0:00:00.846) 0:02:53.261 *******
2026-02-15 04:12:45.314170 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-02-15 04:12:45.314179 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-02-15 04:12:45.314188 | orchestrator |
2026-02-15 04:12:45.314196 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-02-15 04:12:45.314204 | orchestrator | Sunday 15 February 2026 04:12:18 +0000 (0:00:03.447) 0:02:56.708 *******
2026-02-15 04:12:45.314213 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-02-15 04:12:45.314222 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-02-15 04:12:45.314277 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-02-15 04:12:45.314287 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-02-15 04:12:45.314297 | orchestrator |
2026-02-15 04:12:45.314305 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-02-15 04:12:45.314313 | orchestrator | Sunday 15 February 2026 04:12:25 +0000 (0:00:06.480) 0:03:03.189 *******
2026-02-15 04:12:45.314321 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-15 04:12:45.314329 | orchestrator |
2026-02-15 04:12:45.314337 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-02-15 04:12:45.314346 | orchestrator | Sunday 15 February 2026 04:12:28 +0000 (0:00:03.234) 0:03:06.423 *******
2026-02-15 04:12:45.314353 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-15 04:12:45.314362 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-02-15 04:12:45.314370 | orchestrator |
2026-02-15 04:12:45.314377 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-02-15 04:12:45.314388 | orchestrator | Sunday 15 February 2026 04:12:32 +0000 (0:00:04.369) 0:03:10.793 *******
2026-02-15 04:12:45.314397 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-15 04:12:45.314405 | orchestrator |
2026-02-15 04:12:45.314412 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-02-15 04:12:45.314420 | orchestrator | Sunday 15 February 2026 04:12:36 +0000 (0:00:03.250) 0:03:14.044 *******
2026-02-15 04:12:45.314429 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-02-15 04:12:45.314448 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-02-15 04:12:45.314457 | orchestrator |
2026-02-15 04:12:45.314465 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-02-15 04:12:45.314493 | orchestrator | Sunday 15 February 2026 04:12:43 +0000 (0:00:07.839) 0:03:21.884 *******
2026-02-15 04:12:45.314507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {},
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:12:45.314519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:12:45.314532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:12:45.314553 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2026-02-15 04:12:50.057533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:12:50.057633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:12:50.057643 | orchestrator | 2026-02-15 04:12:50.057651 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-02-15 04:12:50.057659 | orchestrator | Sunday 15 February 2026 04:12:45 +0000 (0:00:01.340) 0:03:23.225 ******* 2026-02-15 04:12:50.057665 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:12:50.057672 | orchestrator | 2026-02-15 04:12:50.057678 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-02-15 04:12:50.057684 | orchestrator | Sunday 15 February 2026 04:12:45 +0000 (0:00:00.133) 0:03:23.358 ******* 2026-02-15 04:12:50.057690 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:12:50.057696 | 
orchestrator | skipping: [testbed-node-1] 2026-02-15 04:12:50.057702 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:12:50.057708 | orchestrator | 2026-02-15 04:12:50.057714 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-02-15 04:12:50.057720 | orchestrator | Sunday 15 February 2026 04:12:45 +0000 (0:00:00.343) 0:03:23.702 ******* 2026-02-15 04:12:50.057726 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 04:12:50.057732 | orchestrator | 2026-02-15 04:12:50.057738 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-02-15 04:12:50.057744 | orchestrator | Sunday 15 February 2026 04:12:46 +0000 (0:00:00.740) 0:03:24.442 ******* 2026-02-15 04:12:50.057753 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:12:50.057763 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:12:50.057773 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:12:50.057785 | orchestrator | 2026-02-15 04:12:50.057799 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-02-15 04:12:50.057809 | orchestrator | Sunday 15 February 2026 04:12:47 +0000 (0:00:00.568) 0:03:25.010 ******* 2026-02-15 04:12:50.057819 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:12:50.057830 | orchestrator | 2026-02-15 04:12:50.057839 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-15 04:12:50.057905 | orchestrator | Sunday 15 February 2026 04:12:47 +0000 (0:00:00.621) 0:03:25.632 ******* 2026-02-15 04:12:50.057936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:12:50.057968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:12:50.057977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:12:50.057983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:12:50.057999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:12:50.058006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:12:50.058052 | orchestrator | 2026-02-15 04:12:50.058067 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-15 04:12:51.863159 | orchestrator | Sunday 15 February 2026 04:12:50 +0000 (0:00:02.329) 0:03:27.961 ******* 2026-02-15 04:12:51.863301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-15 04:12:51.863338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:12:51.863361 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:12:51.863385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-15 04:12:51.863444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:12:51.863458 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:12:51.863491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-15 04:12:51.863505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:12:51.863516 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:12:51.863528 | orchestrator | 2026-02-15 04:12:51.863541 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-02-15 04:12:51.863553 | orchestrator | Sunday 15 February 2026 04:12:50 +0000 (0:00:00.952) 0:03:28.914 
******* 2026-02-15 04:12:51.863566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-15 04:12:51.863604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:12:51.863625 | orchestrator | skipping: [testbed-node-0] 
2026-02-15 04:12:51.863656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-15 04:12:54.246227 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:12:54.246310 | orchestrator | skipping: [testbed-node-1] 2026-02-15 
04:12:54.246321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-15 04:12:54.246358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:12:54.246365 | orchestrator | skipping: [testbed-node-2] 2026-02-15 
04:12:54.246371 | orchestrator | 2026-02-15 04:12:54.246379 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-02-15 04:12:54.246386 | orchestrator | Sunday 15 February 2026 04:12:51 +0000 (0:00:00.862) 0:03:29.777 ******* 2026-02-15 04:12:54.246393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:12:54.246413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:12:54.246425 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:12:54.246436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:12:54.246443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:12:54.246453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2026-02-15 04:13:01.542936 | orchestrator | 2026-02-15 04:13:01.543033 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-02-15 04:13:01.543048 | orchestrator | Sunday 15 February 2026 04:12:54 +0000 (0:00:02.385) 0:03:32.162 ******* 2026-02-15 04:13:01.543062 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:13:01.543111 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:13:01.543122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:13:01.543146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:01.543164 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:01.543173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:01.543181 | orchestrator | 2026-02-15 04:13:01.543190 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-02-15 04:13:01.543198 | orchestrator | Sunday 15 February 2026 04:13:00 +0000 (0:00:06.622) 0:03:38.784 ******* 2026-02-15 04:13:01.543211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-15 04:13:01.543221 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:13:01.543229 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:13:01.543247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-15 04:13:06.063839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:13:06.063988 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:13:06.064026 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-02-15 04:13:06.064043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:13:06.064054 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:13:06.064066 | orchestrator | 2026-02-15 04:13:06.064079 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-02-15 04:13:06.064092 | orchestrator | Sunday 15 February 2026 04:13:01 +0000 (0:00:00.674) 0:03:39.459 ******* 2026-02-15 04:13:06.064103 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:13:06.064114 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:13:06.064125 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:13:06.064135 | orchestrator | 2026-02-15 04:13:06.064147 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2026-02-15 04:13:06.064158 | orchestrator | Sunday 15 February 2026 04:13:03 +0000 (0:00:01.654) 0:03:41.114 ******* 2026-02-15 04:13:06.064193 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:13:06.064204 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:13:06.064215 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:13:06.064226 | orchestrator | 2026-02-15 04:13:06.064237 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-02-15 04:13:06.064248 | orchestrator | Sunday 15 February 2026 04:13:03 +0000 (0:00:00.354) 0:03:41.468 ******* 2026-02-15 04:13:06.064279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:13:06.064299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:13:06.064313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.2.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-02-15 04:13:06.064334 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:06.064347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:06.064367 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:48.574852 | orchestrator | 2026-02-15 04:13:48.575078 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-15 04:13:48.575104 | orchestrator | Sunday 15 February 2026 04:13:05 +0000 (0:00:02.030) 0:03:43.499 ******* 2026-02-15 04:13:48.575122 | orchestrator | 2026-02-15 04:13:48.575139 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-15 04:13:48.575156 | orchestrator | Sunday 15 February 2026 04:13:05 +0000 (0:00:00.154) 0:03:43.653 ******* 2026-02-15 
04:13:48.575172 | orchestrator | 2026-02-15 04:13:48.575190 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-02-15 04:13:48.575207 | orchestrator | Sunday 15 February 2026 04:13:05 +0000 (0:00:00.172) 0:03:43.826 ******* 2026-02-15 04:13:48.575223 | orchestrator | 2026-02-15 04:13:48.575239 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-02-15 04:13:48.575256 | orchestrator | Sunday 15 February 2026 04:13:06 +0000 (0:00:00.143) 0:03:43.969 ******* 2026-02-15 04:13:48.575272 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:13:48.575291 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:13:48.575308 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:13:48.575324 | orchestrator | 2026-02-15 04:13:48.575341 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-02-15 04:13:48.575358 | orchestrator | Sunday 15 February 2026 04:13:24 +0000 (0:00:18.737) 0:04:02.707 ******* 2026-02-15 04:13:48.575376 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:13:48.575394 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:13:48.575429 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:13:48.575446 | orchestrator | 2026-02-15 04:13:48.575463 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-02-15 04:13:48.575480 | orchestrator | 2026-02-15 04:13:48.575497 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-15 04:13:48.575514 | orchestrator | Sunday 15 February 2026 04:13:35 +0000 (0:00:10.452) 0:04:13.159 ******* 2026-02-15 04:13:48.575557 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:13:48.575576 | orchestrator | 2026-02-15 04:13:48.575593 | 
orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-15 04:13:48.575611 | orchestrator | Sunday 15 February 2026 04:13:36 +0000 (0:00:01.326) 0:04:14.485 ******* 2026-02-15 04:13:48.575627 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:13:48.575643 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:13:48.575659 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:13:48.575675 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:13:48.575692 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:13:48.575709 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:13:48.575725 | orchestrator | 2026-02-15 04:13:48.575741 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-02-15 04:13:48.575757 | orchestrator | Sunday 15 February 2026 04:13:37 +0000 (0:00:00.814) 0:04:15.300 ******* 2026-02-15 04:13:48.575774 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:13:48.575790 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:13:48.575806 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:13:48.575822 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 04:13:48.575839 | orchestrator | 2026-02-15 04:13:48.575857 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-15 04:13:48.575873 | orchestrator | Sunday 15 February 2026 04:13:38 +0000 (0:00:00.949) 0:04:16.250 ******* 2026-02-15 04:13:48.575890 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-02-15 04:13:48.575939 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-02-15 04:13:48.575955 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-02-15 04:13:48.575971 | orchestrator | 2026-02-15 04:13:48.575987 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-15 
04:13:48.576003 | orchestrator | Sunday 15 February 2026 04:13:39 +0000 (0:00:00.942) 0:04:17.193 ******* 2026-02-15 04:13:48.576020 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-02-15 04:13:48.576036 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-02-15 04:13:48.576053 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-02-15 04:13:48.576069 | orchestrator | 2026-02-15 04:13:48.576085 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-15 04:13:48.576103 | orchestrator | Sunday 15 February 2026 04:13:40 +0000 (0:00:01.252) 0:04:18.445 ******* 2026-02-15 04:13:48.576120 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-02-15 04:13:48.576136 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:13:48.576151 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-02-15 04:13:48.576165 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:13:48.576175 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-02-15 04:13:48.576184 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:13:48.576194 | orchestrator | 2026-02-15 04:13:48.576203 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-02-15 04:13:48.576213 | orchestrator | Sunday 15 February 2026 04:13:41 +0000 (0:00:00.632) 0:04:19.078 ******* 2026-02-15 04:13:48.576222 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-15 04:13:48.576232 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-15 04:13:48.576241 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-15 04:13:48.576251 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-15 04:13:48.576260 | orchestrator | skipping: [testbed-node-0] 
2026-02-15 04:13:48.576269 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-15 04:13:48.576290 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-15 04:13:48.576299 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:13:48.576328 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-15 04:13:48.576338 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-15 04:13:48.576348 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-02-15 04:13:48.576358 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:13:48.576367 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-15 04:13:48.576377 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-15 04:13:48.576387 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-02-15 04:13:48.576396 | orchestrator | 2026-02-15 04:13:48.576406 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-02-15 04:13:48.576416 | orchestrator | Sunday 15 February 2026 04:13:42 +0000 (0:00:01.373) 0:04:20.451 ******* 2026-02-15 04:13:48.576425 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:13:48.576434 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:13:48.576444 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:13:48.576453 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:13:48.576463 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:13:48.576472 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:13:48.576482 | orchestrator | 2026-02-15 04:13:48.576490 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-02-15 04:13:48.576504 | orchestrator | 
Sunday 15 February 2026 04:13:43 +0000 (0:00:01.237) 0:04:21.689 ******* 2026-02-15 04:13:48.576512 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:13:48.576520 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:13:48.576528 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:13:48.576535 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:13:48.576543 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:13:48.576551 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:13:48.576559 | orchestrator | 2026-02-15 04:13:48.576567 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-02-15 04:13:48.576575 | orchestrator | Sunday 15 February 2026 04:13:46 +0000 (0:00:02.840) 0:04:24.529 ******* 2026-02-15 04:13:48.576585 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-15 04:13:48.576596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 
'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-15 04:13:48.576616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388084 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388226 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388353 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388385 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:50.388490 | orchestrator | 2026-02-15 04:13:50.388503 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-15 04:13:50.388517 | orchestrator | Sunday 15 
February 2026 04:13:49 +0000 (0:00:02.428) 0:04:26.958 ******* 2026-02-15 04:13:50.388529 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:13:50.388541 | orchestrator | 2026-02-15 04:13:50.388553 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-02-15 04:13:50.388573 | orchestrator | Sunday 15 February 2026 04:13:50 +0000 (0:00:01.346) 0:04:28.305 ******* 2026-02-15 04:13:53.897234 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-15 04:13:53.897396 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-15 04:13:53.897428 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-15 04:13:53.897471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 
04:13:53.897485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 04:13:53.897518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 04:13:53.897538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-15 04:13:53.897550 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-15 04:13:53.897562 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-15 04:13:53.897585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:53.897597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:53.897608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:53.897628 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:55.699666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:55.699789 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:13:55.699843 | orchestrator | 2026-02-15 04:13:55.699865 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-02-15 04:13:55.699884 | orchestrator | Sunday 15 February 2026 04:13:54 +0000 (0:00:03.889) 0:04:32.194 ******* 2026-02-15 04:13:55.700051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-15 04:13:55.700077 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-15 04:13:55.700098 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-15 04:13:55.700118 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:13:55.700177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-15 04:13:55.700201 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-15 04:13:55.700236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-15 04:13:55.700256 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:13:55.700277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-15 04:13:55.700330 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-15 04:13:55.700370 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-15 04:13:57.874525 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:13:57.874631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-15 04:13:57.874676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-15 04:13:57.874690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 04:13:57.874702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 04:13:57.874714 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:13:57.874725 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:13:57.874737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-15 04:13:57.874749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 04:13:57.874760 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:13:57.874772 | orchestrator |
2026-02-15 04:13:57.874799 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2026-02-15 04:13:57.874813 | orchestrator | Sunday 15 February 2026 04:13:56 +0000 (0:00:01.782) 0:04:33.978 *******
2026-02-15 04:13:57.874843 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-15 04:13:57.874864 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-15 04:13:57.874876 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-15 04:13:57.874986 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:13:57.875002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-15 04:13:57.875014 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-15 04:13:57.875041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-15 04:14:05.821411 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:14:05.821523 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-15 04:14:05.821542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-15 04:14:05.821554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-15 04:14:05.821565 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:14:05.821577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-15 04:14:05.821604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 04:14:05.821633 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:14:05.821660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-15 04:14:05.821672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 04:14:05.821682 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:14:05.821692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-15 04:14:05.821703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 04:14:05.821713 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:14:05.821723 | orchestrator |
2026-02-15 04:14:05.821734 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-02-15 04:14:05.821745 | orchestrator | Sunday 15 February 2026 04:13:58 +0000 (0:00:02.582) 0:04:36.561 *******
2026-02-15 04:14:05.821755 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:14:05.821765 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:14:05.821775 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:14:05.821785 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 04:14:05.821795 | orchestrator |
2026-02-15 04:14:05.821805 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2026-02-15 04:14:05.821816 | orchestrator | Sunday 15 February 2026 04:13:59 +0000 (0:00:00.940) 0:04:37.501 *******
2026-02-15 04:14:05.821826 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-15 04:14:05.821836 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-15 04:14:05.821846 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-15 04:14:05.821863 | orchestrator |
2026-02-15 04:14:05.821873 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2026-02-15 04:14:05.821883 | orchestrator | Sunday 15 February 2026 04:14:00 +0000 (0:00:01.217) 0:04:38.718 *******
2026-02-15 04:14:05.821893 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-15 04:14:05.821922 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-15 04:14:05.821933 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-15 04:14:05.821943 | orchestrator |
2026-02-15 04:14:05.821954 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2026-02-15 04:14:05.821966 | orchestrator | Sunday 15 February 2026 04:14:01 +0000 (0:00:00.695) 0:04:39.762 *******
2026-02-15 04:14:05.821977 | orchestrator | ok: [testbed-node-3]
2026-02-15 04:14:05.821989 | orchestrator | ok: [testbed-node-4]
2026-02-15 04:14:05.821999 | orchestrator | ok: [testbed-node-5]
2026-02-15 04:14:05.822011 | orchestrator |
2026-02-15 04:14:05.822089 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2026-02-15 04:14:05.822110 | orchestrator | Sunday 15 February 2026 04:14:02 +0000 (0:00:00.552) 0:04:40.458 *******
2026-02-15 04:14:05.822120 | orchestrator | ok: [testbed-node-3]
2026-02-15 04:14:05.822130 | orchestrator | ok: [testbed-node-4]
2026-02-15 04:14:05.822145 | orchestrator | ok: [testbed-node-5]
2026-02-15 04:14:05.822156 | orchestrator |
2026-02-15 04:14:05.822165 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2026-02-15 04:14:05.822175 | orchestrator | Sunday 15 February 2026 04:14:03 +0000 (0:00:00.552) 0:04:41.011 *******
2026-02-15 04:14:05.822185 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-15 04:14:05.822195 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-15 04:14:05.822205 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-15 04:14:05.822215 | orchestrator |
2026-02-15 04:14:05.822224 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2026-02-15 04:14:05.822234 | orchestrator | Sunday 15 February 2026 04:14:04 +0000 (0:00:01.465) 0:04:42.476 *******
2026-02-15 04:14:05.822251 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-15 04:14:25.173560 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-15 04:14:25.173677 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-15 04:14:25.173692 | orchestrator |
2026-02-15 04:14:25.173705 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2026-02-15 04:14:25.173718 | orchestrator | Sunday 15 February 2026 04:14:05 +0000 (0:00:01.258) 0:04:43.735 *******
2026-02-15 04:14:25.173728 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2026-02-15 04:14:25.173738 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2026-02-15 04:14:25.173748 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2026-02-15 04:14:25.173757 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2026-02-15 04:14:25.173767 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2026-02-15 04:14:25.173777 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2026-02-15 04:14:25.173786 | orchestrator |
2026-02-15 04:14:25.173796 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2026-02-15 04:14:25.173806 | orchestrator | Sunday 15 February 2026 04:14:09 +0000 (0:00:04.018) 0:04:47.754 *******
2026-02-15 04:14:25.173817 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:14:25.173827 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:14:25.173837 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:14:25.173847 | orchestrator |
2026-02-15 04:14:25.173857 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2026-02-15 04:14:25.173867 | orchestrator | Sunday 15 February 2026 04:14:10 +0000 (0:00:00.347) 0:04:48.101 *******
2026-02-15 04:14:25.173877 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:14:25.173887 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:14:25.173897 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:14:25.173907 | orchestrator |
2026-02-15 04:14:25.173973 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2026-02-15 04:14:25.173983 | orchestrator | Sunday 15 February 2026 04:14:10 +0000 (0:00:00.596) 0:04:48.698 *******
2026-02-15 04:14:25.173993 | orchestrator | changed: [testbed-node-3]
2026-02-15 04:14:25.174003 | orchestrator | changed: [testbed-node-4]
2026-02-15 04:14:25.174012 | orchestrator | changed: [testbed-node-5]
2026-02-15 04:14:25.174071 | orchestrator |
2026-02-15 04:14:25.174081 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2026-02-15 04:14:25.174091 | orchestrator | Sunday 15 February 2026 04:14:12 +0000 (0:00:01.321) 0:04:50.019 *******
2026-02-15 04:14:25.174104 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-15 04:14:25.174117 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-15 04:14:25.174128 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2026-02-15 04:14:25.174140 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-15 04:14:25.174152 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-15 04:14:25.174163 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2026-02-15 04:14:25.174174 | orchestrator |
2026-02-15 04:14:25.174185 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2026-02-15 04:14:25.174197 | orchestrator | Sunday 15 February 2026 04:14:15 +0000 (0:00:03.541) 0:04:53.561 *******
2026-02-15 04:14:25.174208 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-15 04:14:25.174219 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-15 04:14:25.174230 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-15 04:14:25.174241 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-02-15 04:14:25.174252 | orchestrator | changed: [testbed-node-3]
2026-02-15 04:14:25.174263 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-02-15 04:14:25.174273 | orchestrator | changed: [testbed-node-4]
2026-02-15 04:14:25.174284 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-02-15 04:14:25.174296 | orchestrator | changed: [testbed-node-5]
2026-02-15 04:14:25.174307 | orchestrator |
2026-02-15 04:14:25.174322 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2026-02-15 04:14:25.174340 | orchestrator | Sunday 15 February 2026 04:14:19 +0000 (0:00:03.566) 0:04:57.127 *******
2026-02-15 04:14:25.174360 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:14:25.174383 | orchestrator |
2026-02-15 04:14:25.174400 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2026-02-15 04:14:25.174416 | orchestrator | Sunday 15 February 2026 04:14:19 +0000 (0:00:00.136) 0:04:57.263 *******
2026-02-15 04:14:25.174433 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:14:25.174466 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:14:25.174486 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:14:25.174501 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:14:25.174518 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:14:25.174534 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:14:25.174550 | orchestrator |
2026-02-15 04:14:25.174560 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2026-02-15 04:14:25.174570 | orchestrator | Sunday 15 February 2026 04:14:20 +0000 (0:00:00.713) 0:04:58.119 *******
2026-02-15 04:14:25.174580 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-15 04:14:25.174589 | orchestrator |
2026-02-15 04:14:25.174599 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2026-02-15 04:14:25.174609 | orchestrator | Sunday 15 February 2026 04:14:20 +0000 (0:00:00.826) 0:04:58.832 *******
2026-02-15 04:14:25.174630 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:14:25.174658 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:14:25.174669 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:14:25.174678 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:14:25.174688 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:14:25.174697 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:14:25.174707 | orchestrator |
2026-02-15 04:14:25.174716 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2026-02-15 04:14:25.174726 | orchestrator | Sunday 15 February 2026 04:14:21 +0000 (0:00:00.826) 0:04:59.659 *******
2026-02-15 04:14:25.174740 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-15 04:14:25.174754 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-15 04:14:25.174764 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-02-15 04:14:25.174780 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-15 04:14:25.174806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-15 04:14:31.684562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-02-15 04:14:31.684696 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-15 04:14:31.684723 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-15 04:14:31.684744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-02-15 04:14:31.684765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 04:14:31.684803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 04:14:31.684878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-02-15 04:14:31.684901 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-02-15 04:14:31.684959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev',
'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:14:31.684978 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:14:31.684998 | orchestrator | 2026-02-15 04:14:31.685020 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-02-15 04:14:31.685041 | orchestrator | Sunday 15 February 2026 04:14:25 +0000 (0:00:03.654) 0:05:03.313 ******* 2026-02-15 04:14:31.685070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-15 04:14:31.685104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-15 04:14:31.685136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-15 04:14:32.450758 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 
'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-15 04:14:32.450884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-15 04:14:32.450909 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-15 04:14:32.451051 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:14:32.451074 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 04:14:32.451118 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:14:32.451138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 04:14:32.451154 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:14:32.451172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 04:14:32.451208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:14:32.451226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:14:32.451236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:14:32.451247 | orchestrator | 2026-02-15 04:14:32.451259 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-02-15 04:14:32.451279 | orchestrator | Sunday 15 February 2026 04:14:32 +0000 (0:00:07.051) 0:05:10.365 ******* 2026-02-15 04:14:55.211558 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:14:55.211669 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:14:55.211684 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:14:55.211695 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:14:55.211706 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:14:55.211716 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:14:55.211726 | orchestrator | 2026-02-15 04:14:55.211737 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-02-15 04:14:55.211749 | orchestrator | Sunday 15 February 2026 04:14:33 +0000 (0:00:01.522) 0:05:11.888 ******* 2026-02-15 04:14:55.211759 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-15 04:14:55.211770 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-15 04:14:55.211780 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-15 04:14:55.211790 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-02-15 04:14:55.211799 | orchestrator | changed: [testbed-node-5] => 
(item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-15 04:14:55.211809 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-02-15 04:14:55.211819 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-15 04:14:55.211830 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:14:55.211865 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-15 04:14:55.211875 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:14:55.211885 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-02-15 04:14:55.211895 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:14:55.211906 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-15 04:14:55.211916 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-15 04:14:55.211958 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-02-15 04:14:55.211975 | orchestrator | 2026-02-15 04:14:55.211994 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-02-15 04:14:55.212010 | orchestrator | Sunday 15 February 2026 04:14:37 +0000 (0:00:03.913) 0:05:15.802 ******* 2026-02-15 04:14:55.212026 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:14:55.212037 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:14:55.212047 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:14:55.212056 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:14:55.212068 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:14:55.212080 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:14:55.212091 | orchestrator | 2026-02-15 04:14:55.212102 | orchestrator | TASK 
[nova-cell : Copying over libvirt SASL configuration] ********************* 2026-02-15 04:14:55.212113 | orchestrator | Sunday 15 February 2026 04:14:38 +0000 (0:00:00.703) 0:05:16.506 ******* 2026-02-15 04:14:55.212124 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-15 04:14:55.212151 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-15 04:14:55.212163 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-15 04:14:55.212172 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-15 04:14:55.212182 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-02-15 04:14:55.212192 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-15 04:14:55.212201 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-02-15 04:14:55.212211 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-15 04:14:55.212220 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-15 04:14:55.212229 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:14:55.212239 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-02-15 04:14:55.212249 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-15 04:14:55.212258 | orchestrator | 
skipping: [testbed-node-1] 2026-02-15 04:14:55.212268 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-15 04:14:55.212277 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-02-15 04:14:55.212287 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:14:55.212296 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-15 04:14:55.212323 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-02-15 04:14:55.212341 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-15 04:14:55.212351 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-15 04:14:55.212361 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-02-15 04:14:55.212371 | orchestrator | 2026-02-15 04:14:55.212381 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-02-15 04:14:55.212390 | orchestrator | Sunday 15 February 2026 04:14:44 +0000 (0:00:05.745) 0:05:22.252 ******* 2026-02-15 04:14:55.212400 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-15 04:14:55.212410 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-15 04:14:55.212419 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-02-15 04:14:55.212429 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-15 04:14:55.212439 
| orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-15 04:14:55.212448 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-15 04:14:55.212458 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-02-15 04:14:55.212468 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-15 04:14:55.212478 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-02-15 04:14:55.212493 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-15 04:14:55.212515 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-15 04:14:55.212536 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-15 04:14:55.212551 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-15 04:14:55.212567 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-15 04:14:55.212582 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:14:55.212597 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-15 04:14:55.212612 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:14:55.212626 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-02-15 04:14:55.212641 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:14:55.212655 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-15 04:14:55.212670 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-02-15 04:14:55.212694 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-15 04:14:55.212709 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-15 04:14:55.212724 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-02-15 04:14:55.212739 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-15 04:14:55.212755 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-15 04:14:55.212771 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-02-15 04:14:55.212786 | orchestrator | 2026-02-15 04:14:55.212916 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-02-15 04:14:55.212972 | orchestrator | Sunday 15 February 2026 04:14:51 +0000 (0:00:07.217) 0:05:29.469 ******* 2026-02-15 04:14:55.212994 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:14:55.213004 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:14:55.213014 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:14:55.213024 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:14:55.213033 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:14:55.213043 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:14:55.213052 | orchestrator | 2026-02-15 04:14:55.213062 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-02-15 04:14:55.213072 | orchestrator | Sunday 15 February 2026 04:14:52 +0000 (0:00:00.840) 0:05:30.309 ******* 2026-02-15 04:14:55.213082 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:14:55.213091 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:14:55.213101 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:14:55.213111 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:14:55.213121 | orchestrator | 
skipping: [testbed-node-1] 2026-02-15 04:14:55.213130 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:14:55.213140 | orchestrator | 2026-02-15 04:14:55.213150 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-02-15 04:14:55.213160 | orchestrator | Sunday 15 February 2026 04:14:53 +0000 (0:00:00.703) 0:05:31.013 ******* 2026-02-15 04:14:55.213170 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:14:55.213179 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:14:55.213189 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:14:55.213199 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:14:55.213209 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:14:55.213218 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:14:55.213227 | orchestrator | 2026-02-15 04:14:55.213250 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-02-15 04:14:56.388542 | orchestrator | Sunday 15 February 2026 04:14:55 +0000 (0:00:02.100) 0:05:33.114 ******* 2026-02-15 04:14:56.388633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 
'timeout': '30'}}})  2026-02-15 04:14:56.388648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-15 04:14:56.388658 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-15 04:14:56.388694 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:14:56.388708 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-02-15 04:14:56.389486 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-15 04:14:56.389512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh 
version --daemon'], 'timeout': '30'}}})  2026-02-15 04:14:56.389519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-02-15 04:14:56.389523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-15 04:14:56.389541 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-02-15 04:14:56.389545 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:14:56.389550 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:14:56.389555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-15 04:14:56.389564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:15:00.228565 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:15:00.228703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-15 04:15:00.228731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:15:00.228744 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:15:00.228756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-02-15 04:15:00.228807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:15:00.228820 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:15:00.228832 | orchestrator | 2026-02-15 04:15:00.228845 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-02-15 04:15:00.228857 | orchestrator | Sunday 15 February 2026 04:14:56 +0000 (0:00:01.541) 0:05:34.655 ******* 2026-02-15 04:15:00.228869 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-02-15 04:15:00.228880 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-02-15 04:15:00.228891 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:15:00.228902 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-02-15 04:15:00.228914 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-02-15 04:15:00.228978 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:15:00.228992 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-02-15 04:15:00.229003 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-02-15 04:15:00.229015 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:15:00.229026 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-02-15 04:15:00.229037 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-02-15 04:15:00.229048 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:15:00.229058 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute) 
 2026-02-15 04:15:00.229070 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-02-15 04:15:00.229088 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:15:00.229108 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-02-15 04:15:00.229128 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-02-15 04:15:00.229149 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:15:00.229169 | orchestrator | 2026-02-15 04:15:00.229189 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-02-15 04:15:00.229202 | orchestrator | Sunday 15 February 2026 04:14:57 +0000 (0:00:01.019) 0:05:35.675 ******* 2026-02-15 04:15:00.229237 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-15 04:15:00.229254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-15 04:15:00.229283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-02-15 04:15:00.229298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 04:15:00.229312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 04:15:00.229335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-02-15 04:15:51.750995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-15 04:15:51.751099 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-15 04:15:51.751107 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-02-15 04:15:51.751124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:15:51.751130 | orchestrator 
| changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:15:51.751136 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:15:51.751154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:15:51.751160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:15:51.751171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-02-15 04:15:51.751177 | orchestrator | 2026-02-15 04:15:51.751184 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-02-15 04:15:51.751191 | orchestrator | Sunday 15 February 2026 04:15:00 +0000 (0:00:02.886) 0:05:38.561 ******* 2026-02-15 
04:15:51.751196 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:15:51.751203 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:15:51.751211 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:15:51.751216 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:15:51.751221 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:15:51.751226 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:15:51.751232 | orchestrator | 2026-02-15 04:15:51.751237 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-15 04:15:51.751242 | orchestrator | Sunday 15 February 2026 04:15:01 +0000 (0:00:00.901) 0:05:39.462 ******* 2026-02-15 04:15:51.751248 | orchestrator | 2026-02-15 04:15:51.751253 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-15 04:15:51.751259 | orchestrator | Sunday 15 February 2026 04:15:01 +0000 (0:00:00.162) 0:05:39.625 ******* 2026-02-15 04:15:51.751264 | orchestrator | 2026-02-15 04:15:51.751269 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-15 04:15:51.751274 | orchestrator | Sunday 15 February 2026 04:15:01 +0000 (0:00:00.166) 0:05:39.792 ******* 2026-02-15 04:15:51.751279 | orchestrator | 2026-02-15 04:15:51.751284 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-15 04:15:51.751289 | orchestrator | Sunday 15 February 2026 04:15:02 +0000 (0:00:00.152) 0:05:39.945 ******* 2026-02-15 04:15:51.751294 | orchestrator | 2026-02-15 04:15:51.751300 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2026-02-15 04:15:51.751305 | orchestrator | Sunday 15 February 2026 04:15:02 +0000 (0:00:00.158) 0:05:40.103 ******* 2026-02-15 04:15:51.751310 | orchestrator | 2026-02-15 04:15:51.751315 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2026-02-15 04:15:51.751320 | orchestrator | Sunday 15 February 2026 04:15:02 +0000 (0:00:00.362) 0:05:40.466 ******* 2026-02-15 04:15:51.751325 | orchestrator | 2026-02-15 04:15:51.751330 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2026-02-15 04:15:51.751335 | orchestrator | Sunday 15 February 2026 04:15:02 +0000 (0:00:00.160) 0:05:40.626 ******* 2026-02-15 04:15:51.751341 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:15:51.751346 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:15:51.751355 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:15:51.751360 | orchestrator | 2026-02-15 04:15:51.751365 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2026-02-15 04:15:51.751370 | orchestrator | Sunday 15 February 2026 04:15:10 +0000 (0:00:07.704) 0:05:48.330 ******* 2026-02-15 04:15:51.751375 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:15:51.751380 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:15:51.751386 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:15:51.751391 | orchestrator | 2026-02-15 04:15:51.751396 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2026-02-15 04:15:51.751401 | orchestrator | Sunday 15 February 2026 04:15:29 +0000 (0:00:19.160) 0:06:07.491 ******* 2026-02-15 04:15:51.751406 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:15:51.751411 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:15:51.751416 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:15:51.751421 | orchestrator | 2026-02-15 04:15:51.751427 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2026-02-15 04:18:17.661012 | orchestrator | Sunday 15 February 2026 04:15:51 +0000 (0:00:22.161) 0:06:29.652 ******* 2026-02-15 04:18:17.661160 | orchestrator | changed: 
[testbed-node-4] 2026-02-15 04:18:17.661174 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:18:17.661184 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:18:17.661193 | orchestrator | 2026-02-15 04:18:17.661204 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2026-02-15 04:18:17.661214 | orchestrator | Sunday 15 February 2026 04:16:35 +0000 (0:00:44.146) 0:07:13.798 ******* 2026-02-15 04:18:17.661223 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:18:17.661232 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:18:17.661241 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:18:17.661250 | orchestrator | 2026-02-15 04:18:17.661259 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2026-02-15 04:18:17.661268 | orchestrator | Sunday 15 February 2026 04:16:36 +0000 (0:00:00.801) 0:07:14.600 ******* 2026-02-15 04:18:17.661276 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:18:17.661285 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:18:17.661294 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:18:17.661302 | orchestrator | 2026-02-15 04:18:17.661312 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-02-15 04:18:17.661321 | orchestrator | Sunday 15 February 2026 04:16:37 +0000 (0:00:00.803) 0:07:15.403 ******* 2026-02-15 04:18:17.661329 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:18:17.661338 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:18:17.661347 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:18:17.661356 | orchestrator | 2026-02-15 04:18:17.661365 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-02-15 04:18:17.661374 | orchestrator | Sunday 15 February 2026 04:17:08 +0000 (0:00:30.628) 0:07:46.032 ******* 2026-02-15 04:18:17.661383 | orchestrator | skipping: 
[testbed-node-3] 2026-02-15 04:18:17.661391 | orchestrator | 2026-02-15 04:18:17.661400 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-02-15 04:18:17.661409 | orchestrator | Sunday 15 February 2026 04:17:08 +0000 (0:00:00.123) 0:07:46.156 ******* 2026-02-15 04:18:17.661418 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:18:17.661426 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:18:17.661435 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:18:17.661444 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:18:17.661453 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:18:17.661462 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2026-02-15 04:18:17.661473 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 04:18:17.661482 | orchestrator | 2026-02-15 04:18:17.661491 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-02-15 04:18:17.661521 | orchestrator | Sunday 15 February 2026 04:17:29 +0000 (0:00:21.408) 0:08:07.564 ******* 2026-02-15 04:18:17.661530 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:18:17.661539 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:18:17.661547 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:18:17.661569 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:18:17.661579 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:18:17.661590 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:18:17.661601 | orchestrator | 2026-02-15 04:18:17.661611 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-02-15 04:18:17.661622 | orchestrator | Sunday 15 February 2026 04:17:38 +0000 (0:00:09.018) 0:08:16.583 ******* 2026-02-15 04:18:17.661631 | orchestrator | skipping: [testbed-node-5] 
2026-02-15 04:18:17.661640 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:18:17.661649 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:18:17.661658 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:18:17.661667 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:18:17.661675 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-02-15 04:18:17.661684 | orchestrator | 2026-02-15 04:18:17.661693 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-02-15 04:18:17.661702 | orchestrator | Sunday 15 February 2026 04:17:42 +0000 (0:00:04.173) 0:08:20.756 ******* 2026-02-15 04:18:17.661710 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 04:18:17.661719 | orchestrator | 2026-02-15 04:18:17.661728 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-02-15 04:18:17.661737 | orchestrator | Sunday 15 February 2026 04:17:57 +0000 (0:00:14.202) 0:08:34.959 ******* 2026-02-15 04:18:17.661745 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 04:18:17.661756 | orchestrator | 2026-02-15 04:18:17.661771 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-02-15 04:18:17.661785 | orchestrator | Sunday 15 February 2026 04:17:58 +0000 (0:00:01.525) 0:08:36.485 ******* 2026-02-15 04:18:17.661800 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:18:17.661814 | orchestrator | 2026-02-15 04:18:17.661829 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-02-15 04:18:17.661843 | orchestrator | Sunday 15 February 2026 04:18:00 +0000 (0:00:01.725) 0:08:38.211 ******* 2026-02-15 04:18:17.661859 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 04:18:17.661878 | orchestrator | 2026-02-15 04:18:17.661896 | 
orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-02-15 04:18:17.661913 | orchestrator | Sunday 15 February 2026 04:18:12 +0000 (0:00:11.909) 0:08:50.120 *******
2026-02-15 04:18:17.661931 | orchestrator | ok: [testbed-node-3]
2026-02-15 04:18:17.661949 | orchestrator | ok: [testbed-node-4]
2026-02-15 04:18:17.661966 | orchestrator | ok: [testbed-node-5]
2026-02-15 04:18:17.661986 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:17.662002 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:17.662117 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:17.662143 | orchestrator |
2026-02-15 04:18:17.662158 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-02-15 04:18:17.662170 | orchestrator |
2026-02-15 04:18:17.662181 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-02-15 04:18:17.662213 | orchestrator | Sunday 15 February 2026 04:18:13 +0000 (0:00:01.775) 0:08:51.896 *******
2026-02-15 04:18:17.662225 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:18:17.662236 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:18:17.662247 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:18:17.662258 | orchestrator |
2026-02-15 04:18:17.662269 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-02-15 04:18:17.662284 | orchestrator |
2026-02-15 04:18:17.662302 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-02-15 04:18:17.662320 | orchestrator | Sunday 15 February 2026 04:18:14 +0000 (0:00:00.950) 0:08:52.846 *******
2026-02-15 04:18:17.662355 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:17.662374 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:17.662393 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:17.662423 | orchestrator |
2026-02-15 04:18:17.662445 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-02-15 04:18:17.662456 | orchestrator |
2026-02-15 04:18:17.662467 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-02-15 04:18:17.662478 | orchestrator | Sunday 15 February 2026 04:18:15 +0000 (0:00:00.753) 0:08:53.600 *******
2026-02-15 04:18:17.662488 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-02-15 04:18:17.662500 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-02-15 04:18:17.662511 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-02-15 04:18:17.662522 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-02-15 04:18:17.662533 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-02-15 04:18:17.662543 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-02-15 04:18:17.662554 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:18:17.662565 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-02-15 04:18:17.662576 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-02-15 04:18:17.662586 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-02-15 04:18:17.662597 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-02-15 04:18:17.662608 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-02-15 04:18:17.662618 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-02-15 04:18:17.662629 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:18:17.662640 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-02-15 04:18:17.662650 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-02-15 04:18:17.662661 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-02-15 04:18:17.662672 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-02-15 04:18:17.662683 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-02-15 04:18:17.662693 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-02-15 04:18:17.662704 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:18:17.662723 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-02-15 04:18:17.662734 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-02-15 04:18:17.662745 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-02-15 04:18:17.662755 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-02-15 04:18:17.662766 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-02-15 04:18:17.662777 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-02-15 04:18:17.662788 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:17.662798 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-02-15 04:18:17.662809 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-02-15 04:18:17.662820 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-02-15 04:18:17.662831 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-02-15 04:18:17.662841 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-02-15 04:18:17.662852 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-02-15 04:18:17.662863 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:17.662874 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-02-15 04:18:17.662885 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-02-15 04:18:17.662896 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-02-15 04:18:17.662915 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-02-15 04:18:17.662926 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-02-15 04:18:17.662937 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-02-15 04:18:17.662947 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:17.662958 | orchestrator |
2026-02-15 04:18:17.662969 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-02-15 04:18:17.662980 | orchestrator |
2026-02-15 04:18:17.662991 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-02-15 04:18:17.663002 | orchestrator | Sunday 15 February 2026 04:18:17 +0000 (0:00:01.383) 0:08:54.983 *******
2026-02-15 04:18:17.663013 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-02-15 04:18:17.663048 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-02-15 04:18:17.663064 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:17.663075 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-02-15 04:18:17.663087 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-02-15 04:18:17.663097 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:17.663108 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-02-15 04:18:17.663119 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-02-15 04:18:17.663130 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:17.663140 | orchestrator |
2026-02-15 04:18:17.663161 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-02-15 04:18:19.404926 | orchestrator |
2026-02-15 04:18:19.405005 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-02-15 04:18:19.405013 | orchestrator | Sunday 15 February 2026 04:18:17 +0000 (0:00:00.585) 0:08:55.568 *******
2026-02-15 04:18:19.405018 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:19.405057 | orchestrator |
2026-02-15 04:18:19.405064 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-02-15 04:18:19.405069 | orchestrator |
2026-02-15 04:18:19.405074 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-02-15 04:18:19.405079 | orchestrator | Sunday 15 February 2026 04:18:18 +0000 (0:00:00.887) 0:08:56.455 *******
2026-02-15 04:18:19.405085 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:19.405090 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:19.405095 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:19.405100 | orchestrator |
2026-02-15 04:18:19.405105 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 04:18:19.405110 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 04:18:19.405118 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2026-02-15 04:18:19.405123 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-15 04:18:19.405128 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2026-02-15 04:18:19.405133 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-02-15 04:18:19.405138 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-15 04:18:19.405143 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-02-15 04:18:19.405148 | orchestrator |
2026-02-15 04:18:19.405172 | orchestrator |
2026-02-15 04:18:19.405177 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 04:18:19.405182 | orchestrator | Sunday 15 February 2026 04:18:19 +0000 (0:00:00.472) 0:08:56.928 *******
2026-02-15 04:18:19.405187 | orchestrator | ===============================================================================
2026-02-15 04:18:19.405203 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 44.15s
2026-02-15 04:18:19.405209 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 34.25s
2026-02-15 04:18:19.405214 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 30.63s
2026-02-15 04:18:19.405219 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 22.32s
2026-02-15 04:18:19.405224 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 22.16s
2026-02-15 04:18:19.405229 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.41s
2026-02-15 04:18:19.405233 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.16s
2026-02-15 04:18:19.405238 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 18.74s
2026-02-15 04:18:19.405243 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.57s
2026-02-15 04:18:19.405248 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 15.64s
2026-02-15 04:18:19.405253 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.20s
2026-02-15 04:18:19.405258 | orchestrator | nova-cell : Create cell ------------------------------------------------ 13.22s
2026-02-15 04:18:19.405262 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.85s
2026-02-15 04:18:19.405267 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.46s
2026-02-15 04:18:19.405272 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.91s
2026-02-15 04:18:19.405277 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.45s
2026-02-15 04:18:19.405282 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.02s
2026-02-15 04:18:19.405286 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.50s
2026-02-15 04:18:19.405291 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.84s
2026-02-15 04:18:19.405296 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.70s
2026-02-15 04:18:21.798323 | orchestrator | 2026-02-15 04:18:21 | INFO  | Task b7600d13-4171-4171-93f1-659a5fa85f7f (horizon) was prepared for execution.
2026-02-15 04:18:21.798431 | orchestrator | 2026-02-15 04:18:21 | INFO  | It takes a moment until task b7600d13-4171-4171-93f1-659a5fa85f7f (horizon) has been started and output is visible here.
2026-02-15 04:18:28.965960 | orchestrator |
2026-02-15 04:18:28.966160 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-15 04:18:28.966176 | orchestrator |
2026-02-15 04:18:28.966186 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-15 04:18:28.966195 | orchestrator | Sunday 15 February 2026 04:18:25 +0000 (0:00:00.257) 0:00:00.257 *******
2026-02-15 04:18:28.966203 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:28.966213 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:28.966222 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:28.966230 | orchestrator |
2026-02-15 04:18:28.966239 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-15 04:18:28.966247 | orchestrator | Sunday 15 February 2026 04:18:26 +0000 (0:00:00.299) 0:00:00.556 *******
2026-02-15 04:18:28.966256 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-02-15 04:18:28.966264 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-02-15 04:18:28.966273 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-02-15 04:18:28.966281 | orchestrator |
2026-02-15 04:18:28.966289 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-02-15 04:18:28.966318 | orchestrator |
2026-02-15 04:18:28.966327 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-15 04:18:28.966335 | orchestrator | Sunday 15 February 2026 04:18:26 +0000 (0:00:00.456) 0:00:01.012 *******
2026-02-15 04:18:28.966343 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 04:18:28.966351 | orchestrator |
2026-02-15 04:18:28.966359 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-02-15 04:18:28.966366 | orchestrator | Sunday 15 February 2026 04:18:27 +0000 (0:00:00.505) 0:00:01.518 *******
2026-02-15 04:18:28.966395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-15 04:18:28.966429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-15 04:18:28.966451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-02-15 04:18:28.966461 | orchestrator |
2026-02-15 04:18:28.966469 | orchestrator | TASK [horizon : Set empty custom policy] ***************************************
2026-02-15 04:18:28.966477 | orchestrator | Sunday 15 February 2026 04:18:28 +0000 (0:00:01.145) 0:00:02.664 *******
2026-02-15 04:18:28.966485 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:28.966493 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:28.966501 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:28.966509 | orchestrator |
2026-02-15 04:18:28.966517 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-02-15 04:18:28.966526 | orchestrator | Sunday 15 February 2026 04:18:28 +0000 (0:00:00.474) 0:00:03.138 *******
2026-02-15 04:18:28.966540 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-15 04:18:35.048847 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-15 04:18:35.049080 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})
2026-02-15 04:18:35.049113 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})
2026-02-15 04:18:35.049217 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})
2026-02-15 04:18:35.049238 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})
2026-02-15 04:18:35.049257 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})
2026-02-15 04:18:35.049277 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})
2026-02-15 04:18:35.049296 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-15 04:18:35.049316 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-15 04:18:35.049336 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})
2026-02-15 04:18:35.049354 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})
2026-02-15 04:18:35.049373 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})
2026-02-15 04:18:35.049393 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})
2026-02-15 04:18:35.049413 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})
2026-02-15 04:18:35.049432 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})
2026-02-15 04:18:35.049451 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})
2026-02-15 04:18:35.049493 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})
2026-02-15 04:18:35.049513 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})
2026-02-15 04:18:35.049532 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})
2026-02-15 04:18:35.049551 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})
2026-02-15 04:18:35.049570 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})
2026-02-15 04:18:35.049589 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})
2026-02-15 04:18:35.049608 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})
2026-02-15 04:18:35.049646 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'})
2026-02-15 04:18:35.049688 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'})
2026-02-15 04:18:35.049707 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True})
2026-02-15 04:18:35.049727 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True})
2026-02-15 04:18:35.049747 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True})
2026-02-15 04:18:35.049767 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True})
2026-02-15 04:18:35.049786 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True})
2026-02-15 04:18:35.049804 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True})
2026-02-15 04:18:35.049837 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True})
2026-02-15 04:18:35.049858 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True})
2026-02-15 04:18:35.049878 | orchestrator |
2026-02-15 04:18:35.049899 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-15 04:18:35.049919 | orchestrator | Sunday 15 February 2026 04:18:29 +0000 (0:00:00.753) 0:00:03.891 *******
2026-02-15 04:18:35.049937 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:35.049956 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:35.049974 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:35.049993 | orchestrator |
2026-02-15 04:18:35.050012 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-15 04:18:35.050123 | orchestrator | Sunday 15 February 2026 04:18:29 +0000 (0:00:00.333) 0:00:04.225 *******
2026-02-15 04:18:35.050136 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.050148 | orchestrator |
2026-02-15 04:18:35.050181 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-15 04:18:35.050212 | orchestrator | Sunday 15 February 2026 04:18:30 +0000 (0:00:00.316) 0:00:04.542 *******
2026-02-15 04:18:35.050230 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.050248 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:35.050267 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:35.050285 | orchestrator |
2026-02-15 04:18:35.050304 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-15 04:18:35.050322 | orchestrator | Sunday 15 February 2026 04:18:30 +0000 (0:00:00.304) 0:00:04.847 *******
2026-02-15 04:18:35.050343 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:35.050354 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:35.050365 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:35.050376 | orchestrator |
2026-02-15 04:18:35.050387 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-15 04:18:35.050407 | orchestrator | Sunday 15 February 2026 04:18:30 +0000 (0:00:00.345) 0:00:05.193 *******
2026-02-15 04:18:35.050425 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.050444 | orchestrator |
2026-02-15 04:18:35.050463 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-15 04:18:35.050482 | orchestrator | Sunday 15 February 2026 04:18:31 +0000 (0:00:00.135) 0:00:05.329 *******
2026-02-15 04:18:35.050503 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.050515 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:35.050534 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:35.050552 | orchestrator |
2026-02-15 04:18:35.050571 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-15 04:18:35.050590 | orchestrator | Sunday 15 February 2026 04:18:31 +0000 (0:00:00.342) 0:00:05.672 *******
2026-02-15 04:18:35.050607 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:35.050626 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:35.050645 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:35.050663 | orchestrator |
2026-02-15 04:18:35.050682 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-15 04:18:35.050701 | orchestrator | Sunday 15 February 2026 04:18:31 +0000 (0:00:00.523) 0:00:06.195 *******
2026-02-15 04:18:35.050720 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.050738 | orchestrator |
2026-02-15 04:18:35.050758 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-15 04:18:35.050776 | orchestrator | Sunday 15 February 2026 04:18:32 +0000 (0:00:00.128) 0:00:06.324 *******
2026-02-15 04:18:35.050794 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.050813 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:35.050832 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:35.050850 | orchestrator |
2026-02-15 04:18:35.050870 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-15 04:18:35.050903 | orchestrator | Sunday 15 February 2026 04:18:32 +0000 (0:00:00.322) 0:00:06.646 *******
2026-02-15 04:18:35.050922 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:35.050941 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:35.050959 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:35.050977 | orchestrator |
2026-02-15 04:18:35.050995 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-15 04:18:35.051013 | orchestrator | Sunday 15 February 2026 04:18:32 +0000 (0:00:00.317) 0:00:06.964 *******
2026-02-15 04:18:35.051065 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.051087 | orchestrator |
2026-02-15 04:18:35.051106 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-15 04:18:35.051126 | orchestrator | Sunday 15 February 2026 04:18:32 +0000 (0:00:00.133) 0:00:07.097 *******
2026-02-15 04:18:35.051145 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.051165 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:35.051184 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:35.051202 | orchestrator |
2026-02-15 04:18:35.051222 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-15 04:18:35.051242 | orchestrator | Sunday 15 February 2026 04:18:33 +0000 (0:00:00.508) 0:00:07.605 *******
2026-02-15 04:18:35.051262 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:35.051281 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:35.051300 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:35.051320 | orchestrator |
2026-02-15 04:18:35.051340 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-15 04:18:35.051360 | orchestrator | Sunday 15 February 2026 04:18:33 +0000 (0:00:00.322) 0:00:07.928 *******
2026-02-15 04:18:35.051379 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.051397 | orchestrator |
2026-02-15 04:18:35.051416 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-15 04:18:35.051436 | orchestrator | Sunday 15 February 2026 04:18:33 +0000 (0:00:00.133) 0:00:08.061 *******
2026-02-15 04:18:35.051456 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.051475 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:35.051494 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:35.051512 | orchestrator |
2026-02-15 04:18:35.051531 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-15 04:18:35.051551 | orchestrator | Sunday 15 February 2026 04:18:34 +0000 (0:00:00.310) 0:00:08.372 *******
2026-02-15 04:18:35.051570 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:35.051588 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:35.051599 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:35.051610 | orchestrator |
2026-02-15 04:18:35.051621 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-15 04:18:35.051632 | orchestrator | Sunday 15 February 2026 04:18:34 +0000 (0:00:00.305) 0:00:08.678 *******
2026-02-15 04:18:35.051643 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.051653 | orchestrator |
2026-02-15 04:18:35.051664 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-15 04:18:35.051675 | orchestrator | Sunday 15 February 2026 04:18:34 +0000 (0:00:00.313) 0:00:08.991 *******
2026-02-15 04:18:35.051686 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:35.051696 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:35.051707 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:35.051718 | orchestrator |
2026-02-15 04:18:35.051729 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-15 04:18:35.051753 | orchestrator | Sunday 15 February 2026 04:18:35 +0000 (0:00:00.327) 0:00:09.318 *******
2026-02-15 04:18:49.250885 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:49.251031 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:49.251092 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:49.251114 | orchestrator |
2026-02-15 04:18:49.251136 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-15 04:18:49.251158 | orchestrator | Sunday 15 February 2026 04:18:35 +0000 (0:00:00.327) 0:00:09.646 *******
2026-02-15 04:18:49.251209 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:49.251222 | orchestrator |
2026-02-15 04:18:49.251234 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-15 04:18:49.251245 | orchestrator | Sunday 15 February 2026 04:18:35 +0000 (0:00:00.160) 0:00:09.806 *******
2026-02-15 04:18:49.251257 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:49.251269 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:49.251280 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:49.251291 | orchestrator |
2026-02-15 04:18:49.251302 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-15 04:18:49.251313 | orchestrator | Sunday 15 February 2026 04:18:35 +0000 (0:00:00.290) 0:00:10.096 *******
2026-02-15 04:18:49.251324 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:49.251335 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:49.251346 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:49.251357 | orchestrator |
2026-02-15 04:18:49.251368 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-15 04:18:49.251379 | orchestrator | Sunday 15 February 2026 04:18:36 +0000 (0:00:00.529) 0:00:10.626 *******
2026-02-15 04:18:49.251390 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:49.251400 | orchestrator |
2026-02-15 04:18:49.251412 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-15 04:18:49.251425 | orchestrator | Sunday 15 February 2026 04:18:36 +0000 (0:00:00.136) 0:00:10.762 *******
2026-02-15 04:18:49.251437 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:49.251449 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:49.251462 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:49.251474 | orchestrator |
2026-02-15 04:18:49.251487 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-15 04:18:49.251500 | orchestrator | Sunday 15 February 2026 04:18:36 +0000 (0:00:00.327) 0:00:11.090 *******
2026-02-15 04:18:49.251517 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:49.251536 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:49.251555 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:49.251572 | orchestrator |
2026-02-15 04:18:49.251598 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-15 04:18:49.251622 | orchestrator | Sunday 15 February 2026 04:18:37 +0000 (0:00:00.337) 0:00:11.427 *******
2026-02-15 04:18:49.251640 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:49.251661 | orchestrator |
2026-02-15 04:18:49.251681 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-15 04:18:49.251700 | orchestrator | Sunday 15 February 2026 04:18:37 +0000 (0:00:00.126) 0:00:11.554 *******
2026-02-15 04:18:49.251717 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:49.251729 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:49.251741 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:49.251754 | orchestrator |
2026-02-15 04:18:49.251767 | orchestrator | TASK [horizon : Update policy file name] ***************************************
2026-02-15 04:18:49.251793 | orchestrator | Sunday 15 February 2026 04:18:37 +0000 (0:00:00.532) 0:00:12.086 *******
2026-02-15 04:18:49.251804 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:18:49.251815 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:18:49.251826 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:18:49.251837 | orchestrator |
2026-02-15 04:18:49.251848 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************
2026-02-15 04:18:49.251859 | orchestrator | Sunday 15 February 2026 04:18:38 +0000 (0:00:00.347) 0:00:12.434 *******
2026-02-15 04:18:49.251870 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:49.251881 | orchestrator |
2026-02-15 04:18:49.251892 | orchestrator | TASK [horizon : Update custom policy file name] ********************************
2026-02-15 04:18:49.251903 | orchestrator | Sunday 15 February 2026 04:18:38 +0000 (0:00:00.158) 0:00:12.593 *******
2026-02-15 04:18:49.251914 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:18:49.251925 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:18:49.251947 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:18:49.251959 | orchestrator |
2026-02-15 04:18:49.251970 | orchestrator | TASK [horizon : Copying over config.json files for services] *******************
2026-02-15 04:18:49.251981 | orchestrator | Sunday 15 February 2026
04:18:38 +0000 (0:00:00.335) 0:00:12.928 ******* 2026-02-15 04:18:49.251992 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:18:49.252003 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:18:49.252014 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:18:49.252025 | orchestrator | 2026-02-15 04:18:49.252108 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-02-15 04:18:49.252132 | orchestrator | Sunday 15 February 2026 04:18:40 +0000 (0:00:01.891) 0:00:14.820 ******* 2026-02-15 04:18:49.252150 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-15 04:18:49.252166 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-15 04:18:49.252177 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-02-15 04:18:49.252188 | orchestrator | 2026-02-15 04:18:49.252199 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-02-15 04:18:49.252210 | orchestrator | Sunday 15 February 2026 04:18:42 +0000 (0:00:01.947) 0:00:16.767 ******* 2026-02-15 04:18:49.252220 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-15 04:18:49.252232 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-15 04:18:49.252243 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-02-15 04:18:49.252254 | orchestrator | 2026-02-15 04:18:49.252265 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-02-15 04:18:49.252298 | orchestrator | Sunday 15 February 2026 04:18:44 +0000 (0:00:01.851) 0:00:18.618 ******* 2026-02-15 04:18:49.252310 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-15 04:18:49.252321 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-15 04:18:49.252332 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-02-15 04:18:49.252343 | orchestrator | 2026-02-15 04:18:49.252353 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-02-15 04:18:49.252365 | orchestrator | Sunday 15 February 2026 04:18:45 +0000 (0:00:01.507) 0:00:20.125 ******* 2026-02-15 04:18:49.252376 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:18:49.252386 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:18:49.252397 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:18:49.252408 | orchestrator | 2026-02-15 04:18:49.252419 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-02-15 04:18:49.252429 | orchestrator | Sunday 15 February 2026 04:18:46 +0000 (0:00:00.513) 0:00:20.639 ******* 2026-02-15 04:18:49.252440 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:18:49.252451 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:18:49.252462 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:18:49.252473 | orchestrator | 2026-02-15 04:18:49.252484 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-15 04:18:49.252494 | orchestrator | Sunday 15 February 2026 04:18:46 +0000 (0:00:00.305) 0:00:20.944 ******* 2026-02-15 04:18:49.252505 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:18:49.252518 | orchestrator | 2026-02-15 04:18:49.252537 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-02-15 04:18:49.252555 | orchestrator | 
Sunday 15 February 2026 04:18:47 +0000 (0:00:00.620) 0:00:21.565 ******* 2026-02-15 04:18:49.252593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-15 04:18:49.252657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-15 04:18:49.895823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-15 04:18:49.895944 | orchestrator | 2026-02-15 04:18:49.895970 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-02-15 04:18:49.895991 | orchestrator | Sunday 15 February 2026 04:18:49 +0000 (0:00:01.951) 0:00:23.516 ******* 2026-02-15 04:18:49.896116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-15 04:18:49.896161 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:18:49.896185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-15 04:18:49.896198 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:18:49.896227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 
'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-15 04:18:52.440832 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:18:52.440964 | orchestrator | 2026-02-15 04:18:52.440995 | orchestrator | TASK [service-cert-copy : horizon | 
Copying over backend internal TLS key] ***** 2026-02-15 04:18:52.441034 | orchestrator | Sunday 15 February 2026 04:18:49 +0000 (0:00:00.649) 0:00:24.165 ******* 2026-02-15 04:18:52.441100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-15 04:18:52.441125 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:18:52.441169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-15 04:18:52.441222 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:18:52.441244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-15 04:18:52.441274 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:18:52.441285 | orchestrator | 2026-02-15 04:18:52.441297 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-02-15 04:18:52.441308 | orchestrator | Sunday 15 February 2026 04:18:50 +0000 (0:00:00.895) 0:00:25.061 ******* 2026-02-15 04:18:52.441390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-15 04:19:41.569490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-15 04:19:41.569672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.2.20251130', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-15 04:19:41.569695 | orchestrator | 
2026-02-15 04:19:41.569711 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-15 04:19:41.569724 | orchestrator | Sunday 15 February 2026 04:18:52 +0000 (0:00:01.651) 0:00:26.713 ******* 2026-02-15 04:19:41.569736 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:19:41.569748 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:19:41.569759 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:19:41.569770 | orchestrator | 2026-02-15 04:19:41.569782 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-02-15 04:19:41.569793 | orchestrator | Sunday 15 February 2026 04:18:52 +0000 (0:00:00.312) 0:00:27.026 ******* 2026-02-15 04:19:41.569805 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:19:41.569816 | orchestrator | 2026-02-15 04:19:41.569828 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-02-15 04:19:41.569839 | orchestrator | Sunday 15 February 2026 04:18:53 +0000 (0:00:00.572) 0:00:27.598 ******* 2026-02-15 04:19:41.569851 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:19:41.569870 | orchestrator | 2026-02-15 04:19:41.569881 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-02-15 04:19:41.569893 | orchestrator | Sunday 15 February 2026 04:18:55 +0000 (0:00:02.384) 0:00:29.982 ******* 2026-02-15 04:19:41.569905 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:19:41.569916 | orchestrator | 2026-02-15 04:19:41.569928 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-02-15 04:19:41.569939 | orchestrator | Sunday 15 February 2026 04:18:58 +0000 (0:00:02.918) 0:00:32.901 ******* 2026-02-15 04:19:41.569951 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:19:41.569962 | orchestrator 
| 2026-02-15 04:19:41.569973 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-15 04:19:41.569985 | orchestrator | Sunday 15 February 2026 04:19:15 +0000 (0:00:16.698) 0:00:49.599 ******* 2026-02-15 04:19:41.569996 | orchestrator | 2026-02-15 04:19:41.570007 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-15 04:19:41.570131 | orchestrator | Sunday 15 February 2026 04:19:15 +0000 (0:00:00.070) 0:00:49.670 ******* 2026-02-15 04:19:41.570147 | orchestrator | 2026-02-15 04:19:41.570159 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-02-15 04:19:41.570171 | orchestrator | Sunday 15 February 2026 04:19:15 +0000 (0:00:00.065) 0:00:49.735 ******* 2026-02-15 04:19:41.570308 | orchestrator | 2026-02-15 04:19:41.570322 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-02-15 04:19:41.570335 | orchestrator | Sunday 15 February 2026 04:19:15 +0000 (0:00:00.087) 0:00:49.822 ******* 2026-02-15 04:19:41.570346 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:19:41.570390 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:19:41.570401 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:19:41.570412 | orchestrator | 2026-02-15 04:19:41.570423 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:19:41.570435 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-02-15 04:19:41.570448 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-15 04:19:41.570459 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-02-15 04:19:41.570470 | orchestrator | 2026-02-15 04:19:41.570481 | orchestrator | 2026-02-15 04:19:41.570492 
| orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:19:41.570510 | orchestrator | Sunday 15 February 2026 04:19:41 +0000 (0:00:26.003) 0:01:15.826 ******* 2026-02-15 04:19:41.570522 | orchestrator | =============================================================================== 2026-02-15 04:19:41.570532 | orchestrator | horizon : Restart horizon container ------------------------------------ 26.00s 2026-02-15 04:19:41.570543 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 16.70s 2026-02-15 04:19:41.570554 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.92s 2026-02-15 04:19:41.570565 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.38s 2026-02-15 04:19:41.570576 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.95s 2026-02-15 04:19:41.570586 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.95s 2026-02-15 04:19:41.570597 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.89s 2026-02-15 04:19:41.570608 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 1.85s 2026-02-15 04:19:41.570619 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.65s 2026-02-15 04:19:41.570629 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.51s 2026-02-15 04:19:41.570648 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.15s 2026-02-15 04:19:41.570678 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.90s 2026-02-15 04:19:41.570739 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2026-02-15 04:19:41.570781 | orchestrator | 
service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2026-02-15 04:19:41.965017 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2026-02-15 04:19:41.965194 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.57s 2026-02-15 04:19:41.965211 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.53s 2026-02-15 04:19:41.965224 | orchestrator | horizon : Update policy file name --------------------------------------- 0.53s 2026-02-15 04:19:41.965235 | orchestrator | horizon : Update policy file name --------------------------------------- 0.52s 2026-02-15 04:19:41.965246 | orchestrator | horizon : Copying over existing policy file ----------------------------- 0.51s 2026-02-15 04:19:44.283126 | orchestrator | 2026-02-15 04:19:44 | INFO  | Task b1546e1c-ec8b-4f4c-a388-76e2dd6fa854 (skyline) was prepared for execution. 2026-02-15 04:19:44.283222 | orchestrator | 2026-02-15 04:19:44 | INFO  | It takes a moment until task b1546e1c-ec8b-4f4c-a388-76e2dd6fa854 (skyline) has been started and output is visible here. 
2026-02-15 04:20:16.393120 | orchestrator | 2026-02-15 04:20:16.393198 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:20:16.393205 | orchestrator | 2026-02-15 04:20:16.393209 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:20:16.393214 | orchestrator | Sunday 15 February 2026 04:19:48 +0000 (0:00:00.268) 0:00:00.268 ******* 2026-02-15 04:20:16.393218 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:20:16.393224 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:20:16.393228 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:20:16.393232 | orchestrator | 2026-02-15 04:20:16.393236 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:20:16.393240 | orchestrator | Sunday 15 February 2026 04:19:48 +0000 (0:00:00.320) 0:00:00.589 ******* 2026-02-15 04:20:16.393244 | orchestrator | ok: [testbed-node-0] => (item=enable_skyline_True) 2026-02-15 04:20:16.393248 | orchestrator | ok: [testbed-node-1] => (item=enable_skyline_True) 2026-02-15 04:20:16.393252 | orchestrator | ok: [testbed-node-2] => (item=enable_skyline_True) 2026-02-15 04:20:16.393256 | orchestrator | 2026-02-15 04:20:16.393259 | orchestrator | PLAY [Apply role skyline] ****************************************************** 2026-02-15 04:20:16.393263 | orchestrator | 2026-02-15 04:20:16.393267 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-15 04:20:16.393271 | orchestrator | Sunday 15 February 2026 04:19:49 +0000 (0:00:00.469) 0:00:01.058 ******* 2026-02-15 04:20:16.393275 | orchestrator | included: /ansible/roles/skyline/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:20:16.393280 | orchestrator | 2026-02-15 04:20:16.393283 | orchestrator | TASK [service-ks-register : skyline | Creating services] *********************** 
2026-02-15 04:20:16.393287 | orchestrator | Sunday 15 February 2026 04:19:49 +0000 (0:00:00.542) 0:00:01.601 ******* 2026-02-15 04:20:16.393291 | orchestrator | changed: [testbed-node-0] => (item=skyline (panel)) 2026-02-15 04:20:16.393295 | orchestrator | 2026-02-15 04:20:16.393299 | orchestrator | TASK [service-ks-register : skyline | Creating endpoints] ********************** 2026-02-15 04:20:16.393303 | orchestrator | Sunday 15 February 2026 04:19:53 +0000 (0:00:03.495) 0:00:05.097 ******* 2026-02-15 04:20:16.393307 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api-int.testbed.osism.xyz:9998 -> internal) 2026-02-15 04:20:16.393311 | orchestrator | changed: [testbed-node-0] => (item=skyline -> https://api.testbed.osism.xyz:9998 -> public) 2026-02-15 04:20:16.393315 | orchestrator | 2026-02-15 04:20:16.393319 | orchestrator | TASK [service-ks-register : skyline | Creating projects] *********************** 2026-02-15 04:20:16.393323 | orchestrator | Sunday 15 February 2026 04:20:00 +0000 (0:00:07.118) 0:00:12.215 ******* 2026-02-15 04:20:16.393346 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-15 04:20:16.393351 | orchestrator | 2026-02-15 04:20:16.393355 | orchestrator | TASK [service-ks-register : skyline | Creating users] ************************** 2026-02-15 04:20:16.393358 | orchestrator | Sunday 15 February 2026 04:20:03 +0000 (0:00:03.392) 0:00:15.608 ******* 2026-02-15 04:20:16.393371 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-15 04:20:16.393375 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service) 2026-02-15 04:20:16.393379 | orchestrator | 2026-02-15 04:20:16.393383 | orchestrator | TASK [service-ks-register : skyline | Creating roles] ************************** 2026-02-15 04:20:16.393387 | orchestrator | Sunday 15 February 2026 04:20:07 +0000 (0:00:04.112) 0:00:19.720 ******* 2026-02-15 04:20:16.393391 | orchestrator | ok: [testbed-node-0] => (item=admin) 
2026-02-15 04:20:16.393395 | orchestrator | 2026-02-15 04:20:16.393399 | orchestrator | TASK [service-ks-register : skyline | Granting user roles] ********************* 2026-02-15 04:20:16.393403 | orchestrator | Sunday 15 February 2026 04:20:11 +0000 (0:00:03.261) 0:00:22.982 ******* 2026-02-15 04:20:16.393407 | orchestrator | changed: [testbed-node-0] => (item=skyline -> service -> admin) 2026-02-15 04:20:16.393411 | orchestrator | 2026-02-15 04:20:16.393414 | orchestrator | TASK [skyline : Ensuring config directories exist] ***************************** 2026-02-15 04:20:16.393418 | orchestrator | Sunday 15 February 2026 04:20:15 +0000 (0:00:03.915) 0:00:26.897 ******* 2026-02-15 04:20:16.393425 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:16.393442 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:16.393447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:16.393461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:16.393467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:16.393475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:20.245824 | orchestrator | 2026-02-15 04:20:20.245929 | orchestrator | TASK [skyline : include_tasks] ************************************************* 2026-02-15 04:20:20.245946 | orchestrator | Sunday 15 February 2026 04:20:16 +0000 (0:00:01.253) 0:00:28.151 ******* 2026-02-15 04:20:20.245958 | orchestrator | included: /ansible/roles/skyline/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:20:20.245970 | orchestrator | 2026-02-15 04:20:20.245981 | orchestrator | TASK [service-cert-copy : skyline | Copying over extra CA certificates] ******** 2026-02-15 04:20:20.245992 | orchestrator | Sunday 15 February 2026 04:20:17 +0000 (0:00:00.739) 0:00:28.890 ******* 2026-02-15 04:20:20.246006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:20.246221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:20.246241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:20.246273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:20.246288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:20.246314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:20.246327 | orchestrator | 2026-02-15 04:20:20.246339 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS certificate] *** 2026-02-15 04:20:20.246352 | orchestrator | Sunday 15 February 2026 04:20:19 +0000 (0:00:02.491) 0:00:31.382 ******* 2026-02-15 04:20:20.246366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-15 04:20:20.246379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-15 04:20:20.246393 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:20:20.246416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-15 04:20:21.591496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-15 04:20:21.591609 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:20:21.591629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-15 04:20:21.591644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-15 04:20:21.591656 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:20:21.591667 | orchestrator | 2026-02-15 04:20:21.591680 | orchestrator | TASK [service-cert-copy : skyline | Copying over backend internal TLS key] ***** 2026-02-15 04:20:21.591692 | orchestrator | Sunday 15 February 2026 04:20:20 +0000 (0:00:00.628) 0:00:32.010 ******* 2026-02-15 04:20:21.591704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 
'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-15 04:20:21.591754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-15 04:20:21.591767 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:20:21.591784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 
'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-15 04:20:21.591796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-15 04:20:21.591808 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:20:21.591819 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': 
{'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2026-02-15 04:20:21.591847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2026-02-15 04:20:30.095817 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:20:30.095928 | orchestrator | 2026-02-15 04:20:30.095945 | orchestrator | TASK 
[skyline : Copying over skyline.yaml files for services] ****************** 2026-02-15 04:20:30.095959 | orchestrator | Sunday 15 February 2026 04:20:21 +0000 (0:00:01.337) 0:00:33.348 ******* 2026-02-15 04:20:30.095990 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:30.096013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:30.096033 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:30.096080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:30.096193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:30.096214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:30.096232 | orchestrator | 2026-02-15 04:20:30.096251 | orchestrator | TASK [skyline : Copying over gunicorn.py files for services] ******************* 2026-02-15 04:20:30.096272 | orchestrator | Sunday 15 February 2026 04:20:24 +0000 (0:00:02.461) 0:00:35.810 ******* 2026-02-15 04:20:30.096290 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-15 04:20:30.096310 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-15 04:20:30.096340 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/gunicorn.py.j2) 2026-02-15 04:20:30.096359 | orchestrator | 2026-02-15 04:20:30.096378 | orchestrator | TASK [skyline : Copying over nginx.conf files for services] ******************** 2026-02-15 04:20:30.096398 | orchestrator | Sunday 15 February 2026 04:20:25 +0000 (0:00:01.612) 0:00:37.422 ******* 2026-02-15 04:20:30.096417 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-15 04:20:30.096437 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-15 04:20:30.096455 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/skyline/templates/nginx.conf.j2) 2026-02-15 04:20:30.096475 | orchestrator | 2026-02-15 04:20:30.096494 | orchestrator | TASK [skyline : Copying over config.json files for services] ******************* 2026-02-15 04:20:30.096515 | orchestrator | Sunday 15 February 2026 04:20:27 +0000 (0:00:02.114) 0:00:39.537 ******* 2026-02-15 04:20:30.096538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:30.096581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:32.248876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': 
['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:32.248974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:32.249013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:32.249024 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:32.249034 | orchestrator | 2026-02-15 04:20:32.249046 | orchestrator | TASK [skyline : Copying over custom logos] ************************************* 2026-02-15 04:20:32.249070 | orchestrator | Sunday 15 February 2026 04:20:30 +0000 (0:00:02.322) 0:00:41.859 ******* 2026-02-15 04:20:32.249080 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:20:32.249125 | orchestrator | skipping: 
[testbed-node-1] 2026-02-15 04:20:32.249135 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:20:32.249144 | orchestrator | 2026-02-15 04:20:32.249168 | orchestrator | TASK [skyline : Check skyline container] *************************************** 2026-02-15 04:20:32.249178 | orchestrator | Sunday 15 February 2026 04:20:30 +0000 (0:00:00.313) 0:00:42.172 ******* 2026-02-15 04:20:32.249188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:32.249206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 
'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:32.249216 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:32.249225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:20:32.249249 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:21:07.475582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': 
'9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2026-02-15 04:21:07.475690 | orchestrator | 2026-02-15 04:21:07.475705 | orchestrator | TASK [skyline : Creating Skyline database] ************************************* 2026-02-15 04:21:07.475717 | orchestrator | Sunday 15 February 2026 04:20:32 +0000 (0:00:01.838) 0:00:44.011 ******* 2026-02-15 04:21:07.475726 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:21:07.475737 | orchestrator | 2026-02-15 04:21:07.475746 | orchestrator | TASK [skyline : Creating Skyline database user and setting permissions] ******** 2026-02-15 04:21:07.475755 | orchestrator | Sunday 15 February 2026 04:20:34 +0000 (0:00:02.199) 0:00:46.211 ******* 2026-02-15 04:21:07.475764 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:21:07.475773 | orchestrator | 2026-02-15 04:21:07.475782 | orchestrator | TASK [skyline : Running Skyline bootstrap container] *************************** 2026-02-15 04:21:07.475791 | orchestrator | Sunday 15 February 2026 04:20:36 +0000 (0:00:02.331) 0:00:48.542 ******* 2026-02-15 04:21:07.475800 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:21:07.475809 | orchestrator | 2026-02-15 04:21:07.475818 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-15 04:21:07.475827 | orchestrator | Sunday 15 February 2026 04:20:44 +0000 (0:00:08.145) 0:00:56.688 ******* 2026-02-15 04:21:07.475836 | orchestrator | 2026-02-15 04:21:07.475845 | orchestrator | TASK [skyline : Flush handlers] ************************************************ 2026-02-15 04:21:07.475854 | orchestrator | Sunday 15 February 2026 04:20:44 +0000 (0:00:00.073) 0:00:56.762 ******* 2026-02-15 04:21:07.475863 | orchestrator | 2026-02-15 04:21:07.475872 | orchestrator | TASK [skyline : Flush handlers] 
************************************************ 2026-02-15 04:21:07.475881 | orchestrator | Sunday 15 February 2026 04:20:45 +0000 (0:00:00.071) 0:00:56.833 ******* 2026-02-15 04:21:07.475889 | orchestrator | 2026-02-15 04:21:07.475898 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-apiserver container] **************** 2026-02-15 04:21:07.475907 | orchestrator | Sunday 15 February 2026 04:20:45 +0000 (0:00:00.071) 0:00:56.905 ******* 2026-02-15 04:21:07.475916 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:21:07.475925 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:21:07.475934 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:21:07.475942 | orchestrator | 2026-02-15 04:21:07.475951 | orchestrator | RUNNING HANDLER [skyline : Restart skyline-console container] ****************** 2026-02-15 04:21:07.475960 | orchestrator | Sunday 15 February 2026 04:20:53 +0000 (0:00:08.053) 0:01:04.958 ******* 2026-02-15 04:21:07.475969 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:21:07.475978 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:21:07.475987 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:21:07.475996 | orchestrator | 2026-02-15 04:21:07.476005 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:21:07.476015 | orchestrator | testbed-node-0 : ok=22  changed=16  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-15 04:21:07.476026 | orchestrator | testbed-node-1 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-15 04:21:07.476060 | orchestrator | testbed-node-2 : ok=13  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-15 04:21:07.476070 | orchestrator | 2026-02-15 04:21:07.476079 | orchestrator | 2026-02-15 04:21:07.476101 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:21:07.476174 | orchestrator | Sunday 15 
February 2026 04:21:07 +0000 (0:00:13.961) 0:01:18.920 ******* 2026-02-15 04:21:07.476186 | orchestrator | =============================================================================== 2026-02-15 04:21:07.476196 | orchestrator | skyline : Restart skyline-console container ---------------------------- 13.96s 2026-02-15 04:21:07.476207 | orchestrator | skyline : Running Skyline bootstrap container --------------------------- 8.15s 2026-02-15 04:21:07.476217 | orchestrator | skyline : Restart skyline-apiserver container --------------------------- 8.05s 2026-02-15 04:21:07.476227 | orchestrator | service-ks-register : skyline | Creating endpoints ---------------------- 7.12s 2026-02-15 04:21:07.476238 | orchestrator | service-ks-register : skyline | Creating users -------------------------- 4.11s 2026-02-15 04:21:07.476248 | orchestrator | service-ks-register : skyline | Granting user roles --------------------- 3.92s 2026-02-15 04:21:07.476258 | orchestrator | service-ks-register : skyline | Creating services ----------------------- 3.50s 2026-02-15 04:21:07.476267 | orchestrator | service-ks-register : skyline | Creating projects ----------------------- 3.39s 2026-02-15 04:21:07.476292 | orchestrator | service-ks-register : skyline | Creating roles -------------------------- 3.26s 2026-02-15 04:21:07.476301 | orchestrator | service-cert-copy : skyline | Copying over extra CA certificates -------- 2.49s 2026-02-15 04:21:07.476310 | orchestrator | skyline : Copying over skyline.yaml files for services ------------------ 2.46s 2026-02-15 04:21:07.476319 | orchestrator | skyline : Creating Skyline database user and setting permissions -------- 2.33s 2026-02-15 04:21:07.476328 | orchestrator | skyline : Copying over config.json files for services ------------------- 2.32s 2026-02-15 04:21:07.476336 | orchestrator | skyline : Creating Skyline database ------------------------------------- 2.20s 2026-02-15 04:21:07.476345 | orchestrator | skyline : Copying over 
nginx.conf files for services -------------------- 2.11s 2026-02-15 04:21:07.476354 | orchestrator | skyline : Check skyline container --------------------------------------- 1.84s 2026-02-15 04:21:07.476363 | orchestrator | skyline : Copying over gunicorn.py files for services ------------------- 1.61s 2026-02-15 04:21:07.476372 | orchestrator | service-cert-copy : skyline | Copying over backend internal TLS key ----- 1.34s 2026-02-15 04:21:07.476386 | orchestrator | skyline : Ensuring config directories exist ----------------------------- 1.25s 2026-02-15 04:21:07.476402 | orchestrator | skyline : include_tasks ------------------------------------------------- 0.74s 2026-02-15 04:21:09.900571 | orchestrator | 2026-02-15 04:21:09 | INFO  | Task 6cf65963-4199-4e84-b9d4-898ee5aac1e9 (glance) was prepared for execution. 2026-02-15 04:21:09.900676 | orchestrator | 2026-02-15 04:21:09 | INFO  | It takes a moment until task 6cf65963-4199-4e84-b9d4-898ee5aac1e9 (glance) has been started and output is visible here. 
2026-02-15 04:21:42.535197 | orchestrator | 2026-02-15 04:21:42.535319 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:21:42.535336 | orchestrator | 2026-02-15 04:21:42.535349 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:21:42.535361 | orchestrator | Sunday 15 February 2026 04:21:14 +0000 (0:00:00.266) 0:00:00.266 ******* 2026-02-15 04:21:42.535373 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:21:42.535385 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:21:42.535396 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:21:42.535408 | orchestrator | 2026-02-15 04:21:42.535419 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:21:42.535430 | orchestrator | Sunday 15 February 2026 04:21:14 +0000 (0:00:00.312) 0:00:00.579 ******* 2026-02-15 04:21:42.535467 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-02-15 04:21:42.535480 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-02-15 04:21:42.535491 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-02-15 04:21:42.535502 | orchestrator | 2026-02-15 04:21:42.535513 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-02-15 04:21:42.535524 | orchestrator | 2026-02-15 04:21:42.535535 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-15 04:21:42.535546 | orchestrator | Sunday 15 February 2026 04:21:14 +0000 (0:00:00.471) 0:00:01.051 ******* 2026-02-15 04:21:42.535557 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:21:42.535569 | orchestrator | 2026-02-15 04:21:42.535580 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-02-15 
04:21:42.535591 | orchestrator | Sunday 15 February 2026 04:21:15 +0000 (0:00:00.572) 0:00:01.623 ******* 2026-02-15 04:21:42.535602 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-02-15 04:21:42.535613 | orchestrator | 2026-02-15 04:21:42.535624 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-02-15 04:21:42.535635 | orchestrator | Sunday 15 February 2026 04:21:18 +0000 (0:00:03.437) 0:00:05.061 ******* 2026-02-15 04:21:42.535646 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-02-15 04:21:42.535657 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-02-15 04:21:42.535671 | orchestrator | 2026-02-15 04:21:42.535684 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-02-15 04:21:42.535697 | orchestrator | Sunday 15 February 2026 04:21:24 +0000 (0:00:06.016) 0:00:11.078 ******* 2026-02-15 04:21:42.535709 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-15 04:21:42.535723 | orchestrator | 2026-02-15 04:21:42.535736 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-02-15 04:21:42.535764 | orchestrator | Sunday 15 February 2026 04:21:27 +0000 (0:00:02.918) 0:00:13.996 ******* 2026-02-15 04:21:42.535777 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-15 04:21:42.535790 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-02-15 04:21:42.535803 | orchestrator | 2026-02-15 04:21:42.535815 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-02-15 04:21:42.535828 | orchestrator | Sunday 15 February 2026 04:21:31 +0000 (0:00:03.998) 0:00:17.994 ******* 2026-02-15 04:21:42.535840 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-15 
04:21:42.535853 | orchestrator | 2026-02-15 04:21:42.535866 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-02-15 04:21:42.535878 | orchestrator | Sunday 15 February 2026 04:21:34 +0000 (0:00:02.784) 0:00:20.779 ******* 2026-02-15 04:21:42.535890 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-02-15 04:21:42.535903 | orchestrator | 2026-02-15 04:21:42.535916 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-02-15 04:21:42.535928 | orchestrator | Sunday 15 February 2026 04:21:38 +0000 (0:00:03.562) 0:00:24.342 ******* 2026-02-15 04:21:42.535964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:21:42.535999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:21:42.536015 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:21:42.536035 | orchestrator | 2026-02-15 04:21:42.536047 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2026-02-15 04:21:42.536058 | orchestrator | Sunday 15 February 2026 04:21:41 +0000 (0:00:03.521) 0:00:27.863 ******* 2026-02-15 04:21:42.536070 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:21:42.536081 | orchestrator | 2026-02-15 04:21:42.536099 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-02-15 04:21:57.844773 | orchestrator | Sunday 15 February 2026 04:21:42 +0000 (0:00:00.736) 0:00:28.599 ******* 2026-02-15 04:21:57.844885 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:21:57.844903 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:21:57.844915 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:21:57.844934 | orchestrator | 2026-02-15 04:21:57.844955 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-02-15 04:21:57.844975 | orchestrator | Sunday 15 February 2026 04:21:46 +0000 (0:00:03.554) 0:00:32.154 ******* 2026-02-15 04:21:57.844993 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-15 04:21:57.845012 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-15 04:21:57.845030 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-15 04:21:57.845049 | orchestrator | 2026-02-15 04:21:57.845069 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-02-15 04:21:57.845086 | orchestrator | Sunday 15 February 2026 04:21:47 +0000 (0:00:01.571) 0:00:33.726 ******* 2026-02-15 04:21:57.845106 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-15 
04:21:57.845124 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-15 04:21:57.845226 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-02-15 04:21:57.845239 | orchestrator | 2026-02-15 04:21:57.845250 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-02-15 04:21:57.845261 | orchestrator | Sunday 15 February 2026 04:21:49 +0000 (0:00:01.388) 0:00:35.115 ******* 2026-02-15 04:21:57.845272 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:21:57.845284 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:21:57.845295 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:21:57.845306 | orchestrator | 2026-02-15 04:21:57.845320 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-02-15 04:21:57.845338 | orchestrator | Sunday 15 February 2026 04:21:49 +0000 (0:00:00.691) 0:00:35.806 ******* 2026-02-15 04:21:57.845356 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:21:57.845375 | orchestrator | 2026-02-15 04:21:57.845394 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-02-15 04:21:57.845413 | orchestrator | Sunday 15 February 2026 04:21:49 +0000 (0:00:00.143) 0:00:35.950 ******* 2026-02-15 04:21:57.845433 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:21:57.845453 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:21:57.845472 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:21:57.845489 | orchestrator | 2026-02-15 04:21:57.845530 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-15 04:21:57.845554 | orchestrator | Sunday 15 February 2026 04:21:50 +0000 (0:00:00.306) 0:00:36.256 ******* 2026-02-15 04:21:57.845572 | orchestrator | included: 
/ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:21:57.845617 | orchestrator | 2026-02-15 04:21:57.845631 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-02-15 04:21:57.845644 | orchestrator | Sunday 15 February 2026 04:21:50 +0000 (0:00:00.747) 0:00:37.003 ******* 2026-02-15 04:21:57.845664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check 
inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:21:57.845706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:21:57.845727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:21:57.845750 | orchestrator | 2026-02-15 04:21:57.845762 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-02-15 04:21:57.845773 | orchestrator | Sunday 15 February 2026 04:21:54 +0000 (0:00:03.855) 0:00:40.859 ******* 2026-02-15 04:21:57.845795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 
'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-15 04:22:01.355337 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:22:01.355466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 
'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-15 04:22:01.355513 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:22:01.355528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-15 04:22:01.355540 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:22:01.355552 | orchestrator | 2026-02-15 04:22:01.355565 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-02-15 04:22:01.355577 | orchestrator | Sunday 15 February 2026 04:21:57 +0000 (0:00:03.053) 0:00:43.913 ******* 2026-02-15 04:22:01.355614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-15 04:22:01.355646 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:22:01.355667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-15 04:22:01.355686 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:22:01.355717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-15 04:22:36.444611 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:22:36.444723 | orchestrator | 2026-02-15 04:22:36.444740 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-02-15 04:22:36.444770 | orchestrator | Sunday 15 February 2026 04:22:01 +0000 (0:00:03.508) 0:00:47.421 ******* 2026-02-15 04:22:36.444781 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:22:36.444793 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:22:36.444804 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:22:36.444815 | orchestrator | 2026-02-15 04:22:36.444826 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-02-15 04:22:36.444837 | orchestrator | Sunday 15 February 2026 04:22:04 +0000 (0:00:03.199) 0:00:50.621 ******* 2026-02-15 04:22:36.444853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:22:36.444869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:22:36.444932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:22:36.444947 | orchestrator | 2026-02-15 04:22:36.444959 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-02-15 04:22:36.444970 | orchestrator | Sunday 15 February 2026 04:22:08 +0000 (0:00:03.883) 0:00:54.504 ******* 2026-02-15 04:22:36.444981 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:22:36.444992 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:22:36.445003 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:22:36.445013 | orchestrator | 2026-02-15 04:22:36.445024 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-02-15 04:22:36.445035 | orchestrator | Sunday 15 February 2026 04:22:14 +0000 (0:00:05.794) 0:01:00.299 ******* 2026-02-15 04:22:36.445046 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:22:36.445057 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:22:36.445068 | 
orchestrator | skipping: [testbed-node-2] 2026-02-15 04:22:36.445079 | orchestrator | 2026-02-15 04:22:36.445090 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-02-15 04:22:36.445101 | orchestrator | Sunday 15 February 2026 04:22:17 +0000 (0:00:03.501) 0:01:03.800 ******* 2026-02-15 04:22:36.445111 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:22:36.445122 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:22:36.445133 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:22:36.445144 | orchestrator | 2026-02-15 04:22:36.445186 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-02-15 04:22:36.445200 | orchestrator | Sunday 15 February 2026 04:22:21 +0000 (0:00:03.353) 0:01:07.154 ******* 2026-02-15 04:22:36.445225 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:22:36.445237 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:22:36.445249 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:22:36.445262 | orchestrator | 2026-02-15 04:22:36.445275 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-02-15 04:22:36.445298 | orchestrator | Sunday 15 February 2026 04:22:24 +0000 (0:00:03.239) 0:01:10.394 ******* 2026-02-15 04:22:36.445311 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:22:36.445324 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:22:36.445336 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:22:36.445348 | orchestrator | 2026-02-15 04:22:36.445360 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-02-15 04:22:36.445373 | orchestrator | Sunday 15 February 2026 04:22:27 +0000 (0:00:03.670) 0:01:14.064 ******* 2026-02-15 04:22:36.445385 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:22:36.445398 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:22:36.445410 | 
orchestrator | skipping: [testbed-node-2] 2026-02-15 04:22:36.445423 | orchestrator | 2026-02-15 04:22:36.445435 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-02-15 04:22:36.445447 | orchestrator | Sunday 15 February 2026 04:22:28 +0000 (0:00:00.666) 0:01:14.731 ******* 2026-02-15 04:22:36.445459 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-15 04:22:36.445472 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:22:36.445486 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-15 04:22:36.445498 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:22:36.445510 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-02-15 04:22:36.445520 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:22:36.445531 | orchestrator | 2026-02-15 04:22:36.445542 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-02-15 04:22:36.445553 | orchestrator | Sunday 15 February 2026 04:22:31 +0000 (0:00:03.338) 0:01:18.069 ******* 2026-02-15 04:22:36.445564 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:22:36.445575 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:22:36.445586 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:22:36.445597 | orchestrator | 2026-02-15 04:22:36.445608 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-02-15 04:22:36.445626 | orchestrator | Sunday 15 February 2026 04:22:36 +0000 (0:00:04.437) 0:01:22.506 ******* 2026-02-15 04:23:50.114996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:23:50.115119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:23:50.115187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-02-15 04:23:50.115249 | orchestrator | 2026-02-15 04:23:50.115265 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-02-15 04:23:50.115278 | orchestrator | Sunday 15 February 2026 04:22:40 +0000 (0:00:04.053) 0:01:26.560 ******* 2026-02-15 04:23:50.115290 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:23:50.115302 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:23:50.115313 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:23:50.115324 | orchestrator | 2026-02-15 04:23:50.115336 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-02-15 04:23:50.115347 | orchestrator | Sunday 15 February 2026 04:22:41 +0000 (0:00:00.555) 0:01:27.116 ******* 2026-02-15 04:23:50.115369 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:23:50.115381 | orchestrator | 2026-02-15 04:23:50.115392 | orchestrator | TASK 
[glance : Creating Glance database user and setting permissions] ********** 2026-02-15 04:23:50.115403 | orchestrator | Sunday 15 February 2026 04:22:43 +0000 (0:00:02.282) 0:01:29.398 ******* 2026-02-15 04:23:50.115414 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:23:50.115430 | orchestrator | 2026-02-15 04:23:50.115448 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-02-15 04:23:50.115466 | orchestrator | Sunday 15 February 2026 04:22:45 +0000 (0:00:02.406) 0:01:31.805 ******* 2026-02-15 04:23:50.115483 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:23:50.115500 | orchestrator | 2026-02-15 04:23:50.115519 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-02-15 04:23:50.115538 | orchestrator | Sunday 15 February 2026 04:22:47 +0000 (0:00:02.152) 0:01:33.957 ******* 2026-02-15 04:23:50.115556 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:23:50.115572 | orchestrator | 2026-02-15 04:23:50.115583 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-02-15 04:23:50.115594 | orchestrator | Sunday 15 February 2026 04:23:16 +0000 (0:00:29.087) 0:02:03.045 ******* 2026-02-15 04:23:50.115605 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:23:50.115616 | orchestrator | 2026-02-15 04:23:50.115627 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-15 04:23:50.115638 | orchestrator | Sunday 15 February 2026 04:23:19 +0000 (0:00:02.370) 0:02:05.415 ******* 2026-02-15 04:23:50.115649 | orchestrator | 2026-02-15 04:23:50.115660 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-02-15 04:23:50.115671 | orchestrator | Sunday 15 February 2026 04:23:19 +0000 (0:00:00.070) 0:02:05.485 ******* 2026-02-15 04:23:50.115682 | orchestrator | 2026-02-15 04:23:50.115693 | orchestrator | TASK 
[glance : Flush handlers] ************************************************* 2026-02-15 04:23:50.115704 | orchestrator | Sunday 15 February 2026 04:23:19 +0000 (0:00:00.071) 0:02:05.556 ******* 2026-02-15 04:23:50.115714 | orchestrator | 2026-02-15 04:23:50.115725 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-02-15 04:23:50.115736 | orchestrator | Sunday 15 February 2026 04:23:19 +0000 (0:00:00.086) 0:02:05.643 ******* 2026-02-15 04:23:50.115747 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:23:50.115758 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:23:50.115770 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:23:50.115781 | orchestrator | 2026-02-15 04:23:50.115792 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:23:50.115805 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-02-15 04:23:50.115817 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-15 04:23:50.115828 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-02-15 04:23:50.115839 | orchestrator | 2026-02-15 04:23:50.115850 | orchestrator | 2026-02-15 04:23:50.115861 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:23:50.115872 | orchestrator | Sunday 15 February 2026 04:23:50 +0000 (0:00:30.529) 0:02:36.173 ******* 2026-02-15 04:23:50.115883 | orchestrator | =============================================================================== 2026-02-15 04:23:50.115894 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.53s 2026-02-15 04:23:50.115904 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.09s 2026-02-15 04:23:50.115915 | 
orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.02s 2026-02-15 04:23:50.115942 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.79s 2026-02-15 04:23:50.451874 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.44s 2026-02-15 04:23:50.451974 | orchestrator | glance : Check glance containers ---------------------------------------- 4.05s 2026-02-15 04:23:50.451990 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.00s 2026-02-15 04:23:50.452002 | orchestrator | glance : Copying over config.json files for services -------------------- 3.88s 2026-02-15 04:23:50.452013 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.86s 2026-02-15 04:23:50.452024 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.67s 2026-02-15 04:23:50.452035 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.56s 2026-02-15 04:23:50.452047 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.55s 2026-02-15 04:23:50.452058 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.52s 2026-02-15 04:23:50.452069 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.51s 2026-02-15 04:23:50.452080 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.50s 2026-02-15 04:23:50.452091 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.44s 2026-02-15 04:23:50.452102 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 3.35s 2026-02-15 04:23:50.452113 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.34s 2026-02-15 04:23:50.452124 | orchestrator | 
glance : Copying over glance-image-import.conf -------------------------- 3.24s 2026-02-15 04:23:50.452135 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.20s 2026-02-15 04:23:52.786287 | orchestrator | 2026-02-15 04:23:52 | INFO  | Task ffc908cf-0b6b-4ae3-844a-d4dd0ab34ba0 (cinder) was prepared for execution. 2026-02-15 04:23:52.786403 | orchestrator | 2026-02-15 04:23:52 | INFO  | It takes a moment until task ffc908cf-0b6b-4ae3-844a-d4dd0ab34ba0 (cinder) has been started and output is visible here. 2026-02-15 04:24:28.793789 | orchestrator | 2026-02-15 04:24:28.793898 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:24:28.793910 | orchestrator | 2026-02-15 04:24:28.793918 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:24:28.793926 | orchestrator | Sunday 15 February 2026 04:23:57 +0000 (0:00:00.256) 0:00:00.257 ******* 2026-02-15 04:24:28.793934 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:24:28.793942 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:24:28.793949 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:24:28.793957 | orchestrator | 2026-02-15 04:24:28.793965 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:24:28.793973 | orchestrator | Sunday 15 February 2026 04:23:57 +0000 (0:00:00.313) 0:00:00.570 ******* 2026-02-15 04:24:28.793980 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-02-15 04:24:28.793989 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-02-15 04:24:28.793997 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-02-15 04:24:28.794004 | orchestrator | 2026-02-15 04:24:28.794011 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-02-15 04:24:28.794063 | orchestrator | 2026-02-15 
04:24:28.794070 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-15 04:24:28.794079 | orchestrator | Sunday 15 February 2026 04:23:57 +0000 (0:00:00.453) 0:00:01.024 ******* 2026-02-15 04:24:28.794087 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:24:28.794096 | orchestrator | 2026-02-15 04:24:28.794105 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-02-15 04:24:28.794113 | orchestrator | Sunday 15 February 2026 04:23:58 +0000 (0:00:00.569) 0:00:01.594 ******* 2026-02-15 04:24:28.794122 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-02-15 04:24:28.794130 | orchestrator | 2026-02-15 04:24:28.794165 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-02-15 04:24:28.794173 | orchestrator | Sunday 15 February 2026 04:24:02 +0000 (0:00:03.609) 0:00:05.203 ******* 2026-02-15 04:24:28.794182 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-02-15 04:24:28.794190 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-02-15 04:24:28.794198 | orchestrator | 2026-02-15 04:24:28.794206 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-02-15 04:24:28.794213 | orchestrator | Sunday 15 February 2026 04:24:08 +0000 (0:00:06.810) 0:00:12.014 ******* 2026-02-15 04:24:28.794221 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-15 04:24:28.794229 | orchestrator | 2026-02-15 04:24:28.794237 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-02-15 04:24:28.794317 | orchestrator | Sunday 15 February 2026 04:24:12 +0000 (0:00:03.304) 
0:00:15.318 ******* 2026-02-15 04:24:28.794327 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-15 04:24:28.794335 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-02-15 04:24:28.794343 | orchestrator | 2026-02-15 04:24:28.794351 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-02-15 04:24:28.794360 | orchestrator | Sunday 15 February 2026 04:24:16 +0000 (0:00:04.137) 0:00:19.456 ******* 2026-02-15 04:24:28.794369 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-15 04:24:28.794377 | orchestrator | 2026-02-15 04:24:28.794385 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-02-15 04:24:28.794406 | orchestrator | Sunday 15 February 2026 04:24:19 +0000 (0:00:03.289) 0:00:22.745 ******* 2026-02-15 04:24:28.794415 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-02-15 04:24:28.794422 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-02-15 04:24:28.794429 | orchestrator | 2026-02-15 04:24:28.794437 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-02-15 04:24:28.794443 | orchestrator | Sunday 15 February 2026 04:24:26 +0000 (0:00:07.326) 0:00:30.071 ******* 2026-02-15 04:24:28.794453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': 
{'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:24:28.794479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:24:28.794495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 
'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:24:28.794503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:28.794515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:28.794523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:28.794531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:28.794543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:34.597989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:34.598155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:34.598189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:34.598203 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:34.598216 | orchestrator | 2026-02-15 04:24:34.598231 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-15 04:24:34.598244 | orchestrator | Sunday 15 February 2026 04:24:28 +0000 (0:00:02.006) 0:00:32.078 ******* 2026-02-15 04:24:34.598295 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:24:34.598316 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:24:34.598336 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:24:34.598355 | orchestrator | 2026-02-15 04:24:34.598368 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-15 04:24:34.598379 | orchestrator | Sunday 15 February 2026 04:24:29 +0000 (0:00:00.513) 0:00:32.591 ******* 2026-02-15 04:24:34.598399 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:24:34.598444 | orchestrator | 2026-02-15 04:24:34.598464 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-02-15 04:24:34.598480 | orchestrator | Sunday 15 February 2026 04:24:29 +0000 (0:00:00.564) 0:00:33.156 ******* 2026-02-15 04:24:34.598498 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-02-15 04:24:34.598517 | 
orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-02-15 04:24:34.598535 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-02-15 04:24:34.598554 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-02-15 04:24:34.598573 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-02-15 04:24:34.598593 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-02-15 04:24:34.598611 | orchestrator | 2026-02-15 04:24:34.598630 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-02-15 04:24:34.598650 | orchestrator | Sunday 15 February 2026 04:24:31 +0000 (0:00:01.598) 0:00:34.754 ******* 2026-02-15 04:24:34.598697 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-15 04:24:34.598722 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-15 04:24:34.598753 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-15 04:24:34.598774 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-15 04:24:34.598822 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-15 04:24:45.276195 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-02-15 04:24:45.276358 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-15 04:24:45.276394 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-15 04:24:45.276417 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-15 04:24:45.276451 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-15 04:24:45.276483 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-15 
04:24:45.276495 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-02-15 04:24:45.276507 | orchestrator | 2026-02-15 04:24:45.276521 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-02-15 04:24:45.276534 | orchestrator | Sunday 15 February 2026 04:24:34 +0000 (0:00:03.308) 0:00:38.063 ******* 2026-02-15 04:24:45.276568 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-15 04:24:45.276580 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-15 04:24:45.276597 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-02-15 04:24:45.276609 | orchestrator | 2026-02-15 04:24:45.276620 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-02-15 04:24:45.276631 | orchestrator | Sunday 15 February 2026 04:24:36 +0000 (0:00:01.519) 0:00:39.583 ******* 2026-02-15 04:24:45.276644 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-02-15 04:24:45.276655 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-02-15 04:24:45.276666 | 
orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-02-15 04:24:45.276677 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-02-15 04:24:45.276696 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-02-15 04:24:45.276707 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-02-15 04:24:45.276720 | orchestrator | 2026-02-15 04:24:45.276733 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-02-15 04:24:45.276746 | orchestrator | Sunday 15 February 2026 04:24:39 +0000 (0:00:02.692) 0:00:42.275 ******* 2026-02-15 04:24:45.276760 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-02-15 04:24:45.276773 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-02-15 04:24:45.276786 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-02-15 04:24:45.276799 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-02-15 04:24:45.276811 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-02-15 04:24:45.276824 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-02-15 04:24:45.276837 | orchestrator | 2026-02-15 04:24:45.276850 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-02-15 04:24:45.276862 | orchestrator | Sunday 15 February 2026 04:24:40 +0000 (0:00:01.056) 0:00:43.331 ******* 2026-02-15 04:24:45.276876 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:24:45.276889 | orchestrator | 2026-02-15 04:24:45.276902 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-02-15 04:24:45.276915 | orchestrator | Sunday 15 February 2026 04:24:40 +0000 (0:00:00.133) 0:00:43.465 ******* 2026-02-15 04:24:45.276928 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:24:45.276941 | orchestrator | 
skipping: [testbed-node-1] 2026-02-15 04:24:45.276954 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:24:45.276966 | orchestrator | 2026-02-15 04:24:45.276978 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-15 04:24:45.276992 | orchestrator | Sunday 15 February 2026 04:24:40 +0000 (0:00:00.504) 0:00:43.970 ******* 2026-02-15 04:24:45.277005 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:24:45.277018 | orchestrator | 2026-02-15 04:24:45.277030 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-02-15 04:24:45.277043 | orchestrator | Sunday 15 February 2026 04:24:41 +0000 (0:00:00.599) 0:00:44.569 ******* 2026-02-15 04:24:45.277066 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:24:46.148831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:24:46.148984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:24:46.149004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:46.149017 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:46.149029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:46.149060 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:46.149073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:46.149097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 
04:24:46.149109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:46.149121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:46.149133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:46.149145 | orchestrator | 2026-02-15 04:24:46.149158 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-02-15 04:24:46.149171 | orchestrator | Sunday 15 February 2026 04:24:45 +0000 (0:00:04.000) 0:00:48.569 ******* 2026-02-15 04:24:46.149191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-15 04:24:46.257560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.257694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.257722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.257743 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:24:46.257764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-15 04:24:46.257786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.257862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 
5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.257893 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.257913 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:24:46.257932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-15 04:24:46.257953 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.257972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.258009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.258106 | orchestrator | skipping: 
[testbed-node-2]
2026-02-15 04:24:46.258129 | orchestrator |
2026-02-15 04:24:46.258151 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2026-02-15 04:24:46.258188 | orchestrator | Sunday 15 February 2026 04:24:46 +0000 (0:00:00.881) 0:00:49.451 *******
2026-02-15 04:24:46.846938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-15 04:24:46.847035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:24:46.847053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.847066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.847104 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:24:46.847117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-15 04:24:46.847152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.847167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.847180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 04:24:46.847192 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:24:46.847203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-15 04:24:46.847224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:24:46.847243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-02-15 04:24:51.595850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-15 04:24:51.595960 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:24:51.595978 | orchestrator |
2026-02-15 04:24:51.595992 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
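Each service item dumped above carries a kolla-style `healthcheck` mapping (`interval`, `retries`, `start_period`, `timeout` as second counts, and a `['CMD-SHELL', command]` test). As a rough illustration of what such a mapping amounts to, the sketch below translates it into `docker run` health flags; the helper name is hypothetical, and whether kolla_docker performs exactly this mapping is an assumption, not something the log shows.

```python
def healthcheck_to_docker_args(hc):
    """Translate a kolla-style healthcheck mapping (as in the task items
    above) into `docker run` health flags.

    Assumption: interval/retries/start_period/timeout are second counts and
    `test` is a ['CMD-SHELL', command] pair, matching the dicts in this log.
    """
    kind, command = hc["test"]
    if kind != "CMD-SHELL":
        raise ValueError(f"unexpected healthcheck test type: {kind}")
    return [
        "--health-cmd", command,
        "--health-interval", f"{hc['interval']}s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", f"{hc['start_period']}s",
        "--health-timeout", f"{hc['timeout']}s",
    ]

# Example: the cinder_api healthcheck from the log above.
args = healthcheck_to_docker_args({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8776"],
    "timeout": "30",
})
```

Note that `healthcheck_curl` and `healthcheck_port` are helper scripts inside the kolla images; the API containers probe their HTTP endpoint, while scheduler/volume/backup containers check their RabbitMQ connection on port 5672.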
2026-02-15 04:24:51.596005 | orchestrator | Sunday 15 February 2026 04:24:47 +0000 (0:00:00.933) 0:00:50.385 *******
2026-02-15 04:24:51.596019 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-15 04:24:51.596033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-15 
04:24:51.596070 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:24:51.596099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:51.596119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:51.596132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:51.596144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:51.596155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:51.596177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:24:51.596195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-15 04:25:04.353926 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-15 04:25:04.354123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-15 04:25:04.354143 | orchestrator |
2026-02-15 04:25:04.354189 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-02-15 04:25:04.354204 | orchestrator | Sunday 15 February 2026 04:24:51 +0000 (0:00:04.507) 0:00:54.892 *******
2026-02-15 04:25:04.354215 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-15 04:25:04.354227 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-15 04:25:04.354238 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-02-15 04:25:04.354275 | orchestrator |
2026-02-15 04:25:04.354335 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-02-15 04:25:04.354348 | orchestrator | Sunday 15 February 2026 04:24:53 +0000 (0:00:01.851) 0:00:56.744 *******
2026-02-15 04:25:04.354360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-15 04:25:04.354374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:25:04.354417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:25:04.354432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:25:04.354446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:25:04.354468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:25:04.354481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:25:04.354494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:25:04.354522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:25:06.614792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-15 04:25:06.614880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-15 04:25:06.614916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-02-15 04:25:06.614930 | orchestrator |
2026-02-15 04:25:06.614943 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-02-15 04:25:06.614957 | orchestrator | Sunday 15 February 2026 04:25:04 +0000 (0:00:10.881) 0:01:07.626 *******
2026-02-15 04:25:06.614968 | orchestrator | changed: [testbed-node-0]
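The "Generating 'hostnqn' file" task above provisions each node with an NVMe-over-Fabrics host NQN (typically written to /etc/nvme/hostnqn) so cinder_volume can attach NVMe-oF targets. A minimal sketch of producing such an identifier in the uuid-based form that `nvme gen-hostnqn` emits; the exact template the role uses is an assumption here, not shown in the log.

```python
import uuid

def generate_hostnqn() -> str:
    # UUID-based host NQN, the form defined by the NVMe base specification
    # and produced by `nvme gen-hostnqn`.
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

nqn = generate_hostnqn()
```

Because the task reports `changed` on all three nodes, each testbed node ends up with its own unique NQN, which is what NVMe-oF targets use to distinguish initiators.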
2026-02-15 04:25:06.614980 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:25:06.614991 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:25:06.615002 | orchestrator |
2026-02-15 04:25:06.615014 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-02-15 04:25:06.615025 | orchestrator | Sunday 15 February 2026 04:25:05 +0000 (0:00:01.527) 0:01:09.153 *******
2026-02-15 04:25:06.615038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-02-15 04:25:06.615062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}})  2026-02-15 04:25:06.615092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 04:25:06.615112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 04:25:06.615124 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:25:06.615136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-15 04:25:06.615148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:25:06.615164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 04:25:06.615184 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 04:25:10.067031 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:25:10.067187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-02-15 04:25:10.067210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:25:10.067224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 04:25:10.067236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 04:25:10.067248 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:25:10.067261 | orchestrator | 2026-02-15 
04:25:10.067273 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-02-15 04:25:10.067286 | orchestrator | Sunday 15 February 2026 04:25:06 +0000 (0:00:00.743) 0:01:09.896 ******* 2026-02-15 04:25:10.067342 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:25:10.067354 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:25:10.067379 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:25:10.067391 | orchestrator | 2026-02-15 04:25:10.067403 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-02-15 04:25:10.067414 | orchestrator | Sunday 15 February 2026 04:25:07 +0000 (0:00:00.484) 0:01:10.380 ******* 2026-02-15 04:25:10.067443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:25:10.067474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:25:10.067487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-02-15 04:25:10.067499 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:25:10.067510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:25:10.067527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:25:10.067554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:26:48.910475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:26:48.910598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-02-15 04:26:48.910617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-15 04:26:48.910647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-02-15 04:26:48.910687 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': 
'30'}}}) 2026-02-15 04:26:48.910701 | orchestrator | 2026-02-15 04:26:48.910715 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-02-15 04:26:48.910728 | orchestrator | Sunday 15 February 2026 04:25:10 +0000 (0:00:02.968) 0:01:13.349 ******* 2026-02-15 04:26:48.910739 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:26:48.910751 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:26:48.910762 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:26:48.910773 | orchestrator | 2026-02-15 04:26:48.910785 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-02-15 04:26:48.910797 | orchestrator | Sunday 15 February 2026 04:25:10 +0000 (0:00:00.318) 0:01:13.668 ******* 2026-02-15 04:26:48.910808 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:26:48.910818 | orchestrator | 2026-02-15 04:26:48.910847 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-02-15 04:26:48.910859 | orchestrator | Sunday 15 February 2026 04:25:12 +0000 (0:00:02.210) 0:01:15.879 ******* 2026-02-15 04:26:48.910870 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:26:48.910881 | orchestrator | 2026-02-15 04:26:48.910892 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-02-15 04:26:48.910903 | orchestrator | Sunday 15 February 2026 04:25:14 +0000 (0:00:02.307) 0:01:18.187 ******* 2026-02-15 04:26:48.910914 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:26:48.910935 | orchestrator | 2026-02-15 04:26:48.910954 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-15 04:26:48.910975 | orchestrator | Sunday 15 February 2026 04:25:34 +0000 (0:00:19.862) 0:01:38.049 ******* 2026-02-15 04:26:48.910994 | orchestrator | 2026-02-15 04:26:48.911013 | orchestrator | TASK [cinder : Flush handlers] 
************************************************* 2026-02-15 04:26:48.911032 | orchestrator | Sunday 15 February 2026 04:25:34 +0000 (0:00:00.074) 0:01:38.124 ******* 2026-02-15 04:26:48.911050 | orchestrator | 2026-02-15 04:26:48.911070 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-02-15 04:26:48.911088 | orchestrator | Sunday 15 February 2026 04:25:34 +0000 (0:00:00.073) 0:01:38.197 ******* 2026-02-15 04:26:48.911102 | orchestrator | 2026-02-15 04:26:48.911115 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-02-15 04:26:48.911128 | orchestrator | Sunday 15 February 2026 04:25:35 +0000 (0:00:00.071) 0:01:38.269 ******* 2026-02-15 04:26:48.911140 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:26:48.911153 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:26:48.911165 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:26:48.911178 | orchestrator | 2026-02-15 04:26:48.911190 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-02-15 04:26:48.911202 | orchestrator | Sunday 15 February 2026 04:26:06 +0000 (0:00:31.105) 0:02:09.375 ******* 2026-02-15 04:26:48.911215 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:26:48.911227 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:26:48.911239 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:26:48.911252 | orchestrator | 2026-02-15 04:26:48.911264 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-02-15 04:26:48.911277 | orchestrator | Sunday 15 February 2026 04:26:16 +0000 (0:00:10.179) 0:02:19.554 ******* 2026-02-15 04:26:48.911324 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:26:48.911337 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:26:48.911348 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:26:48.911359 | orchestrator | 2026-02-15 
04:26:48.911369 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-02-15 04:26:48.911380 | orchestrator | Sunday 15 February 2026 04:26:42 +0000 (0:00:26.504) 0:02:46.058 ******* 2026-02-15 04:26:48.911391 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:26:48.911401 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:26:48.911412 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:26:48.911423 | orchestrator | 2026-02-15 04:26:48.911434 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-02-15 04:26:48.911445 | orchestrator | Sunday 15 February 2026 04:26:48 +0000 (0:00:05.760) 0:02:51.819 ******* 2026-02-15 04:26:48.911456 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:26:48.911466 | orchestrator | 2026-02-15 04:26:48.911477 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:26:48.911489 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-15 04:26:48.911502 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-15 04:26:48.911513 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-15 04:26:48.911523 | orchestrator | 2026-02-15 04:26:48.911534 | orchestrator | 2026-02-15 04:26:48.911552 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:26:48.911564 | orchestrator | Sunday 15 February 2026 04:26:48 +0000 (0:00:00.268) 0:02:52.087 ******* 2026-02-15 04:26:48.911574 | orchestrator | =============================================================================== 2026-02-15 04:26:48.911585 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 31.11s 2026-02-15 04:26:48.911596 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 26.50s 2026-02-15 04:26:48.911606 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.86s 2026-02-15 04:26:48.911617 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.88s 2026-02-15 04:26:48.911628 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.18s 2026-02-15 04:26:48.911638 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.33s 2026-02-15 04:26:48.911649 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.81s 2026-02-15 04:26:48.911659 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.76s 2026-02-15 04:26:48.911670 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.51s 2026-02-15 04:26:48.911680 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.14s 2026-02-15 04:26:48.911691 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.00s 2026-02-15 04:26:48.911702 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.61s 2026-02-15 04:26:48.911712 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.31s 2026-02-15 04:26:48.911723 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.30s 2026-02-15 04:26:48.911743 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.29s 2026-02-15 04:26:49.271475 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.97s 2026-02-15 04:26:49.271581 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 2.69s 2026-02-15 04:26:49.271601 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.31s 2026-02-15 04:26:49.271647 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.21s 2026-02-15 04:26:49.271665 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.01s 2026-02-15 04:26:51.635532 | orchestrator | 2026-02-15 04:26:51 | INFO  | Task feb4139e-a3e9-48df-a4ba-21e35a03ca3b (barbican) was prepared for execution. 2026-02-15 04:26:51.635650 | orchestrator | 2026-02-15 04:26:51 | INFO  | It takes a moment until task feb4139e-a3e9-48df-a4ba-21e35a03ca3b (barbican) has been started and output is visible here. 2026-02-15 04:27:36.763386 | orchestrator | 2026-02-15 04:27:36.763485 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:27:36.763498 | orchestrator | 2026-02-15 04:27:36.763506 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:27:36.763513 | orchestrator | Sunday 15 February 2026 04:26:55 +0000 (0:00:00.270) 0:00:00.270 ******* 2026-02-15 04:27:36.763520 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:27:36.763529 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:27:36.763535 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:27:36.763541 | orchestrator | 2026-02-15 04:27:36.763548 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:27:36.763555 | orchestrator | Sunday 15 February 2026 04:26:56 +0000 (0:00:00.307) 0:00:00.577 ******* 2026-02-15 04:27:36.763562 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-02-15 04:27:36.763569 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-02-15 04:27:36.763576 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-02-15 04:27:36.763583 | orchestrator | 2026-02-15 04:27:36.763590 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2026-02-15 04:27:36.763597 | orchestrator | 2026-02-15 04:27:36.763604 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-15 04:27:36.763611 | orchestrator | Sunday 15 February 2026 04:26:56 +0000 (0:00:00.428) 0:00:01.006 ******* 2026-02-15 04:27:36.763619 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:27:36.763627 | orchestrator | 2026-02-15 04:27:36.763634 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-02-15 04:27:36.763640 | orchestrator | Sunday 15 February 2026 04:26:57 +0000 (0:00:00.582) 0:00:01.588 ******* 2026-02-15 04:27:36.763648 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-02-15 04:27:36.763654 | orchestrator | 2026-02-15 04:27:36.763661 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-02-15 04:27:36.763668 | orchestrator | Sunday 15 February 2026 04:27:00 +0000 (0:00:03.612) 0:00:05.201 ******* 2026-02-15 04:27:36.763674 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-02-15 04:27:36.763681 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-02-15 04:27:36.763688 | orchestrator | 2026-02-15 04:27:36.763695 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-02-15 04:27:36.763701 | orchestrator | Sunday 15 February 2026 04:27:07 +0000 (0:00:06.774) 0:00:11.975 ******* 2026-02-15 04:27:36.763707 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-15 04:27:36.763714 | orchestrator | 2026-02-15 04:27:36.763722 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-02-15 
04:27:36.763743 | orchestrator | Sunday 15 February 2026 04:27:10 +0000 (0:00:03.329) 0:00:15.305 *******
2026-02-15 04:27:36.763751 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-15 04:27:36.763758 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service)
2026-02-15 04:27:36.763764 | orchestrator |
2026-02-15 04:27:36.763771 | orchestrator | TASK [service-ks-register : barbican | Creating roles] *************************
2026-02-15 04:27:36.763778 | orchestrator | Sunday 15 February 2026 04:27:15 +0000 (0:00:04.274) 0:00:19.579 *******
2026-02-15 04:27:36.763805 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-15 04:27:36.763813 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin)
2026-02-15 04:27:36.763820 | orchestrator | changed: [testbed-node-0] => (item=creator)
2026-02-15 04:27:36.763827 | orchestrator | changed: [testbed-node-0] => (item=observer)
2026-02-15 04:27:36.763833 | orchestrator | changed: [testbed-node-0] => (item=audit)
2026-02-15 04:27:36.763839 | orchestrator |
2026-02-15 04:27:36.763846 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ********************
2026-02-15 04:27:36.763852 | orchestrator | Sunday 15 February 2026 04:27:31 +0000 (0:00:16.088) 0:00:35.668 *******
2026-02-15 04:27:36.763858 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin)
2026-02-15 04:27:36.763865 | orchestrator |
2026-02-15 04:27:36.763872 | orchestrator | TASK [barbican : Ensuring config directories exist] ****************************
2026-02-15 04:27:36.763878 | orchestrator | Sunday 15 February 2026 04:27:35 +0000 (0:00:03.903) 0:00:39.572 *******
2026-02-15 04:27:36.763888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:36.763914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:36.763922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:36.763934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:36.763950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:36.763957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:36.763970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:42.756864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:42.756983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:42.757000 | orchestrator |
2026-02-15 04:27:42.757014 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-02-15 04:27:42.757028 | orchestrator | Sunday 15 February 2026 04:27:36 +0000 (0:00:01.607) 0:00:41.180 *******
2026-02-15 04:27:42.757064 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-02-15 04:27:42.757075 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-02-15 04:27:42.757086 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-02-15 04:27:42.757098 | orchestrator |
2026-02-15 04:27:42.757109 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-02-15 04:27:42.757135 | orchestrator | Sunday 15 February 2026 04:27:37 +0000 (0:00:01.153) 0:00:42.333 *******
2026-02-15 04:27:42.757167 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:27:42.757179 | orchestrator |
2026-02-15 04:27:42.757190 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-02-15 04:27:42.757201 | orchestrator | Sunday 15 February 2026 04:27:38 +0000 (0:00:00.339) 0:00:42.673 *******
2026-02-15 04:27:42.757234 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:27:42.757246 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:27:42.757257 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:27:42.757267 | orchestrator |
2026-02-15 04:27:42.757279 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-02-15 04:27:42.757289 | orchestrator | Sunday 15 February 2026 04:27:38 +0000 (0:00:00.319) 0:00:42.992 *******
2026-02-15 04:27:42.757301 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 04:27:42.757312 | orchestrator |
2026-02-15 04:27:42.757323 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-02-15 04:27:42.757334 | orchestrator | Sunday 15 February 2026 04:27:39 +0000 (0:00:00.560) 0:00:43.553 *******
2026-02-15 04:27:42.757347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:42.757379 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:42.757393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:42.757423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:42.757438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:42.757451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:42.757464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:42.757486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:44.215822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:44.215943 | orchestrator |
2026-02-15 04:27:44.215959 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-02-15 04:27:44.215970 | orchestrator | Sunday 15 February 2026 04:27:42 +0000 (0:00:03.619) 0:00:47.173 *******
2026-02-15 04:27:44.215996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:44.216007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:44.216020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:44.216030 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:27:44.216042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:44.216070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:44.216088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:44.216098 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:27:44.216113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:44.216125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:44.216135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:44.216145 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:27:44.216155 | orchestrator |
2026-02-15 04:27:44.216165 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-02-15 04:27:44.216175 | orchestrator | Sunday 15 February 2026 04:27:43 +0000 (0:00:00.604) 0:00:47.777 *******
2026-02-15 04:27:44.216193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:47.665489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:47.665635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:47.665657 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:27:47.665673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:47.665688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:47.665700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:47.665748 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:27:47.665791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:47.665822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:47.665841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:47.665899 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:27:47.665920 | orchestrator |
2026-02-15 04:27:47.665942 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-02-15 04:27:47.665964 | orchestrator | Sunday 15 February 2026 04:27:44 +0000 (0:00:00.863) 0:00:48.641 *******
2026-02-15 04:27:47.665983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:47.666091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:47.666125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-02-15 04:27:57.048594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:57.048728 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:57.048759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 04:27:57.048782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:57.048823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:57.048836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:27:57.048849 | orchestrator |
2026-02-15 04:27:57.048863 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-02-15 04:27:57.048876 | orchestrator | Sunday 15 February 2026 04:27:47 +0000 (0:00:03.443) 0:00:52.084 *******
2026-02-15 04:27:57.048888 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:27:57.048901 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:27:57.048912 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:27:57.048923 | orchestrator |
2026-02-15 04:27:57.048953 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-02-15 04:27:57.048965 | orchestrator | Sunday 15 February 2026 04:27:49 +0000 (0:00:01.526) 0:00:53.611 *******
2026-02-15 04:27:57.048976 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 04:27:57.048987 | orchestrator |
2026-02-15 04:27:57.048999 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-02-15 04:27:57.049011 | orchestrator | Sunday 15 February 2026 04:27:50 +0000 (0:00:00.924) 0:00:54.536 *******
2026-02-15 04:27:57.049030 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:27:57.049048 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:27:57.049066 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:27:57.049084 | orchestrator |
2026-02-15 04:27:57.049102 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-02-15 04:27:57.049122 | orchestrator | Sunday 15 February 2026 04:27:50 +0000 (0:00:00.546) 0:00:55.082 *******
2026-02-15 04:27:57.049146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-15 04:27:57.049180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-15 04:27:57.049229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-15 04:27:57.049295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:27:57.890781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:27:57.890871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:27:57.890896 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:27:57.890903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:27:57.890907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:27:57.890912 | orchestrator | 2026-02-15 04:27:57.890917 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-02-15 04:27:57.890923 | orchestrator | Sunday 15 February 2026 04:27:57 +0000 (0:00:06.382) 0:01:01.465 ******* 2026-02-15 04:27:57.890937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-15 04:27:57.890946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:27:57.890951 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:27:57.890961 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:27:57.890967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-15 04:27:57.890972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:27:57.890979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:27:57.890985 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:27:57.890999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-02-15 04:28:00.485999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:28:00.487047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:28:00.487081 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:28:00.487092 | orchestrator | 2026-02-15 04:28:00.487101 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-02-15 04:28:00.487110 | orchestrator | Sunday 15 February 2026 04:27:57 +0000 (0:00:00.844) 0:01:02.309 ******* 2026-02-15 04:28:00.487119 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-15 04:28:00.487128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-15 04:28:00.487167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-02-15 04:28:00.487206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:28:00.487218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:28:00.487225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:28:00.487233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:28:00.487241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:28:00.487252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:28:00.487265 | orchestrator | 2026-02-15 04:28:00.487273 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-02-15 04:28:00.487285 | orchestrator | Sunday 15 February 2026 04:28:00 +0000 (0:00:02.591) 0:01:04.900 ******* 2026-02-15 04:28:36.916012 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:28:36.916197 | orchestrator | skipping: [testbed-node-1] 2026-02-15 
04:28:36.916226 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:28:36.916246 | orchestrator | 2026-02-15 04:28:36.916268 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-02-15 04:28:36.916290 | orchestrator | Sunday 15 February 2026 04:28:00 +0000 (0:00:00.331) 0:01:05.232 ******* 2026-02-15 04:28:36.916309 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:28:36.916330 | orchestrator | 2026-02-15 04:28:36.916342 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-02-15 04:28:36.916354 | orchestrator | Sunday 15 February 2026 04:28:03 +0000 (0:00:02.293) 0:01:07.526 ******* 2026-02-15 04:28:36.916365 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:28:36.916376 | orchestrator | 2026-02-15 04:28:36.916387 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-02-15 04:28:36.916399 | orchestrator | Sunday 15 February 2026 04:28:05 +0000 (0:00:02.553) 0:01:10.079 ******* 2026-02-15 04:28:36.916409 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:28:36.916420 | orchestrator | 2026-02-15 04:28:36.916431 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-15 04:28:36.916442 | orchestrator | Sunday 15 February 2026 04:28:18 +0000 (0:00:12.853) 0:01:22.933 ******* 2026-02-15 04:28:36.916453 | orchestrator | 2026-02-15 04:28:36.916464 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-15 04:28:36.916475 | orchestrator | Sunday 15 February 2026 04:28:18 +0000 (0:00:00.072) 0:01:23.005 ******* 2026-02-15 04:28:36.916485 | orchestrator | 2026-02-15 04:28:36.916496 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-02-15 04:28:36.916507 | orchestrator | Sunday 15 February 2026 04:28:18 +0000 (0:00:00.073) 0:01:23.079 ******* 2026-02-15 
04:28:36.916518 | orchestrator | 2026-02-15 04:28:36.916529 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-02-15 04:28:36.916540 | orchestrator | Sunday 15 February 2026 04:28:18 +0000 (0:00:00.072) 0:01:23.151 ******* 2026-02-15 04:28:36.916552 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:28:36.916565 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:28:36.916578 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:28:36.916591 | orchestrator | 2026-02-15 04:28:36.916604 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-02-15 04:28:36.916616 | orchestrator | Sunday 15 February 2026 04:28:26 +0000 (0:00:07.857) 0:01:31.009 ******* 2026-02-15 04:28:36.916629 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:28:36.916641 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:28:36.916653 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:28:36.916664 | orchestrator | 2026-02-15 04:28:36.916675 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-02-15 04:28:36.916686 | orchestrator | Sunday 15 February 2026 04:28:31 +0000 (0:00:04.862) 0:01:35.871 ******* 2026-02-15 04:28:36.916697 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:28:36.916708 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:28:36.916719 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:28:36.916730 | orchestrator | 2026-02-15 04:28:36.916742 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:28:36.916754 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-15 04:28:36.916767 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 04:28:36.916805 | orchestrator | testbed-node-2 : ok=14  changed=10  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 04:28:36.916817 | orchestrator | 2026-02-15 04:28:36.916828 | orchestrator | 2026-02-15 04:28:36.916839 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:28:36.916850 | orchestrator | Sunday 15 February 2026 04:28:36 +0000 (0:00:05.138) 0:01:41.010 ******* 2026-02-15 04:28:36.916861 | orchestrator | =============================================================================== 2026-02-15 04:28:36.916872 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.09s 2026-02-15 04:28:36.916883 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.85s 2026-02-15 04:28:36.916894 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.86s 2026-02-15 04:28:36.916904 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.77s 2026-02-15 04:28:36.916915 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 6.38s 2026-02-15 04:28:36.916926 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 5.14s 2026-02-15 04:28:36.916937 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 4.86s 2026-02-15 04:28:36.916948 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.27s 2026-02-15 04:28:36.916959 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.90s 2026-02-15 04:28:36.916970 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.62s 2026-02-15 04:28:36.916996 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.61s 2026-02-15 04:28:36.917007 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.44s 
2026-02-15 04:28:36.917018 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.33s 2026-02-15 04:28:36.917029 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.59s 2026-02-15 04:28:36.917040 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.55s 2026-02-15 04:28:36.917071 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.29s 2026-02-15 04:28:36.917083 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.61s 2026-02-15 04:28:36.917094 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 1.53s 2026-02-15 04:28:36.917104 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.15s 2026-02-15 04:28:36.917115 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 0.93s 2026-02-15 04:28:39.269634 | orchestrator | 2026-02-15 04:28:39 | INFO  | Task c07068f1-26ba-4429-9c87-16fc750119b1 (designate) was prepared for execution. 2026-02-15 04:28:39.269745 | orchestrator | 2026-02-15 04:28:39 | INFO  | It takes a moment until task c07068f1-26ba-4429-9c87-16fc750119b1 (designate) has been started and output is visible here. 
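Editor's note: the `healthcheck` dicts repeated throughout the barbican tasks above (interval/retries/start_period/test/timeout, with numeric values stored as strings) correspond to Docker's standard `--health-*` container options. The sketch below is illustrative only, assuming that mapping; the function name and the seconds-suffix convention are assumptions, not kolla-ansible's actual implementation.

```python
# Illustrative sketch (NOT kolla-ansible source): map a kolla-style
# healthcheck dict, as printed in the log above, onto docker CLI flags.

def healthcheck_to_docker_args(hc: dict) -> list[str]:
    """Translate a healthcheck dict into docker run --health-* flags.

    The log stores durations as bare-second strings (e.g. '30'),
    so we append an 's' unit suffix for the docker CLI.
    """
    test = hc["test"]
    # ['CMD-SHELL', '<cmd>'] means: run <cmd> via the container shell.
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        f"--health-cmd={cmd}",
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
    ]

# One of the barbican-api healthcheck dicts from the log:
hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9311"],
      "timeout": "30"}
print(healthcheck_to_docker_args(hc))
```

The `healthcheck_port`/`healthcheck_curl` commands seen in the log are scripts shipped inside the kolla images; only the flag names above are real Docker options.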
2026-02-15 04:29:11.347294 | orchestrator |
2026-02-15 04:29:11.347412 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-15 04:29:11.347429 | orchestrator |
2026-02-15 04:29:11.347441 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-15 04:29:11.347454 | orchestrator | Sunday 15 February 2026 04:28:43 +0000 (0:00:00.278) 0:00:00.278 *******
2026-02-15 04:29:11.347541 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:29:11.347555 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:29:11.347566 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:29:11.347577 | orchestrator |
2026-02-15 04:29:11.347588 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-15 04:29:11.347600 | orchestrator | Sunday 15 February 2026 04:28:43 +0000 (0:00:00.320) 0:00:00.598 *******
2026-02-15 04:29:11.347612 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-02-15 04:29:11.347647 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-02-15 04:29:11.347658 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-02-15 04:29:11.347670 | orchestrator |
2026-02-15 04:29:11.347681 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-02-15 04:29:11.347692 | orchestrator |
2026-02-15 04:29:11.347703 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-15 04:29:11.347720 | orchestrator | Sunday 15 February 2026 04:28:44 +0000 (0:00:00.457) 0:00:01.056 *******
2026-02-15 04:29:11.347740 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 04:29:11.347759 | orchestrator |
2026-02-15 04:29:11.347778 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-02-15 04:29:11.347797 | orchestrator | Sunday 15 February 2026 04:28:44 +0000 (0:00:00.565) 0:00:01.621 *******
2026-02-15 04:29:11.347815 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-02-15 04:29:11.347834 | orchestrator |
2026-02-15 04:29:11.347853 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-02-15 04:29:11.347872 | orchestrator | Sunday 15 February 2026 04:28:48 +0000 (0:00:03.421) 0:00:05.043 *******
2026-02-15 04:29:11.347891 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-02-15 04:29:11.347913 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-02-15 04:29:11.347933 | orchestrator |
2026-02-15 04:29:11.347954 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-02-15 04:29:11.347974 | orchestrator | Sunday 15 February 2026 04:28:54 +0000 (0:00:06.537) 0:00:11.580 *******
2026-02-15 04:29:11.347994 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-15 04:29:11.348015 | orchestrator |
2026-02-15 04:29:11.348034 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-02-15 04:29:11.348055 | orchestrator | Sunday 15 February 2026 04:28:58 +0000 (0:00:03.350) 0:00:14.930 *******
2026-02-15 04:29:11.348069 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-15 04:29:11.348101 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-02-15 04:29:11.348115 | orchestrator |
2026-02-15 04:29:11.348128 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-02-15 04:29:11.348140 | orchestrator | Sunday 15 February 2026 04:29:02 +0000 (0:00:04.057) 0:00:18.988 *******
2026-02-15 04:29:11.348154 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-15 04:29:11.348167 | orchestrator |
2026-02-15 04:29:11.348179 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-02-15 04:29:11.348191 | orchestrator | Sunday 15 February 2026 04:29:05 +0000 (0:00:03.193) 0:00:22.181 *******
2026-02-15 04:29:11.348202 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2026-02-15 04:29:11.348213 | orchestrator |
2026-02-15 04:29:11.348223 | orchestrator | TASK [designate : Ensuring config directories exist] ***************************
2026-02-15 04:29:11.348234 | orchestrator | Sunday 15 February 2026 04:29:09 +0000 (0:00:03.758) 0:00:25.940 *******
2026-02-15 04:29:11.348265 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-15 04:29:11.348317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 04:29:11.348331 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 04:29:11.348344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:11.348357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:11.348374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:11.348387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:11.348412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:17.999220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:17.999316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:17.999336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:17.999351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:17.999381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:17.999420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:17.999457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:17.999475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-15 
04:29:17.999490 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:29:17.999505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:29:17.999515 | orchestrator |
2026-02-15 04:29:17.999526 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2026-02-15 04:29:17.999537 | orchestrator | Sunday 15 February 2026 04:29:12 +0000 (0:00:02.904) 0:00:28.844 *******
2026-02-15 04:29:17.999546 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:29:17.999556 | orchestrator |
2026-02-15 04:29:17.999565 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2026-02-15 04:29:17.999587 | orchestrator | Sunday 15 February 2026 04:29:12 +0000 (0:00:00.144) 0:00:28.988 *******
2026-02-15 04:29:17.999596 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:29:17.999613 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:29:17.999622 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:29:17.999631 | orchestrator |
2026-02-15 04:29:17.999640 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-15 04:29:17.999653 | orchestrator | Sunday 15 February 2026 04:29:12 +0000 (0:00:00.507) 0:00:29.496 *******
2026-02-15 04:29:17.999663 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 04:29:17.999673 | orchestrator |
2026-02-15 04:29:17.999681 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-02-15 04:29:17.999690 | orchestrator | Sunday 15 February 2026 04:29:13 +0000 (0:00:00.587) 0:00:30.084 *******
2026-02-15 04:29:17.999700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-15 04:29:17.999719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 04:29:19.883883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 04:29:19.883984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:19.884041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:19.884055 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:19.884066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:19.884130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:19.884143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:19.884154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:19.884166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:19.884187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:19.884196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:19.884204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:19.884220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:20.778258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:29:20.778336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:29:20.778357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:29:20.778362 | orchestrator |
2026-02-15 04:29:20.778376 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-02-15 04:29:20.778382 | orchestrator | Sunday 15 February 2026 04:29:19 +0000 (0:00:06.432) 0:00:36.516 *******
2026-02-15 04:29:20.778388 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api',
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-15 04:29:20.778393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 04:29:20.778408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 04:29:20.778413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 04:29:20.778424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 04:29:20.778431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 
'timeout': '30'}}})  2026-02-15 04:29:20.778436 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:29:20.778441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-15 04:29:20.778445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 04:29:20.778449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 04:29:20.778455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.575451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.575564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.575583 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:29:21.575618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-15 04:29:21.575632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 04:29:21.575645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.575657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.575707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 
04:29:21.575721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.575734 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:29:21.575746 | orchestrator | 2026-02-15 04:29:21.575764 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-02-15 04:29:21.575777 | orchestrator | Sunday 15 February 2026 04:29:20 +0000 (0:00:01.002) 0:00:37.518 ******* 2026-02-15 04:29:21.575788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-15 04:29:21.575800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 04:29:21.575812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.575839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.892588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.892681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.892713 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:29:21.892727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-15 04:29:21.892740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 04:29:21.892751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.892783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.892810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.892821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.892832 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:29:21.892847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-15 04:29:21.892858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 04:29:21.892869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.892886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 04:29:21.892905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 04:29:26.274991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:29:26.275120 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:29:26.275131 | orchestrator | 2026-02-15 04:29:26.275137 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-02-15 
04:29:26.275155 | orchestrator | Sunday 15 February 2026 04:29:21 +0000 (0:00:01.005) 0:00:38.524 ******* 2026-02-15 04:29:26.275161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 04:29:26.275168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 04:29:26.275187 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 04:29:26.275206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:26.275213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:26.275221 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:26.275226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:26.275231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:26.275240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:26.275246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:26.275256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:37.812113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:37.812266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:37.812297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:37.812335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:37.812357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:37.812377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:37.812422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:37.812443 | orchestrator | 2026-02-15 04:29:37.812464 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-02-15 04:29:37.812485 | orchestrator | Sunday 15 February 2026 04:29:28 +0000 (0:00:06.247) 0:00:44.771 ******* 2026-02-15 04:29:37.812517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 04:29:37.812543 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 04:29:37.812573 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-02-15 04:29:37.812588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:37.812614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:46.116881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117187 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117216 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:46.117363 | orchestrator | 2026-02-15 04:29:46.117382 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-02-15 04:29:46.117400 | orchestrator | Sunday 15 February 2026 04:29:42 +0000 (0:00:14.406) 0:00:59.178 ******* 2026-02-15 04:29:46.117428 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-15 04:29:50.433996 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-15 04:29:50.434283 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-02-15 04:29:50.434314 | orchestrator | 2026-02-15 04:29:50.434354 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-02-15 04:29:50.434375 | orchestrator | Sunday 15 February 2026 04:29:46 +0000 (0:00:03.571) 0:01:02.750 ******* 2026-02-15 04:29:50.434392 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-15 04:29:50.434438 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-15 04:29:50.434457 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-02-15 04:29:50.434474 | orchestrator | 2026-02-15 04:29:50.434493 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-02-15 04:29:50.434512 | orchestrator | Sunday 15 February 2026 04:29:48 +0000 (0:00:02.519) 0:01:05.269 ******* 2026-02-15 04:29:50.434536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-15 04:29:50.434563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-02-15 04:29:50.434584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-02-15 04:29:50.434632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:50.434666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 04:29:50.434702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': 
'30'}}})  2026-02-15 04:29:50.434744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 04:29:50.434766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-02-15 04:29:50.434785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}})  2026-02-15 04:29:50.434804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 04:29:50.434848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 04:29:53.200161 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-02-15 04:29:53.200279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 04:29:53.200298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 04:29:53.200312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 04:29:53.200324 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:53.200336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:53.200404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:29:53.200419 | orchestrator | 2026-02-15 04:29:53.200433 | orchestrator | TASK [designate : Copying over rndc.key] 
***************************************
2026-02-15 04:29:53.200446 | orchestrator | Sunday 15 February 2026 04:29:51 +0000 (0:00:02.849) 0:01:08.119 *******
2026-02-15 04:29:53.200458 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-15 04:29:53.200472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-15 04:29:53.200483 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-15 04:29:53.200495 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 04:29:53.200526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286397 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 04:29:54.286410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 04:29:54.286520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286567 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286605 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:29:54.286618 | orchestrator |
2026-02-15 04:29:54.286631 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-15 04:29:54.286650 | orchestrator | Sunday 15 February 2026 04:29:54 +0000 (0:00:02.793) 0:01:10.912 *******
2026-02-15 04:29:55.283032 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:29:55.283173 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:29:55.283188 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:29:55.283200 | orchestrator |
2026-02-15 04:29:55.283213 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2026-02-15 04:29:55.283225 | orchestrator | Sunday 15 February 2026 04:29:54 +0000 (0:00:00.330) 0:01:11.242 *******
2026-02-15 04:29:55.283240 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-15 04:29:55.283255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 04:29:55.283268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 04:29:55.283307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 04:29:55.283321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 04:29:55.283365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:29:55.283379 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:29:55.283390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-15 04:29:55.283403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 04:29:55.283414 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 04:29:55.283434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 04:29:55.283446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 04:29:55.283470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:29:58.860277 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:29:58.860448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-15 04:29:58.860482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 04:29:58.860505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 04:29:58.860555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 04:29:58.860579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 04:29:58.860621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:29:58.860645 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:29:58.860665 | orchestrator |
2026-02-15 04:29:58.860710 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-02-15 04:29:58.860733 | orchestrator | Sunday 15 February 2026 04:29:55 +0000 (0:00:00.776) 0:01:12.019 *******
2026-02-15 04:29:58.860753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-15 04:29:58.860776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-15 04:29:58.860813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-02-15 04:29:58.860836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 04:29:58.860877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 04:30:00.744706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 04:30:00.744817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.744853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.744889 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.744902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.744916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.744959 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.744973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.744986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.745004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.745016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.745027 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.745149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 04:30:00.745165 | orchestrator |
2026-02-15 04:30:00.745179 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-02-15 04:30:00.745193 | orchestrator | Sunday 15 February 2026 04:30:00 +0000 (0:00:05.046) 0:01:17.066 *******
2026-02-15 04:30:00.745204 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:30:00.745226 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:31:30.871680 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:31:30.871789 | orchestrator |
2026-02-15 04:31:30.871806 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-02-15 04:31:30.871819 | orchestrator | Sunday 15 February 2026 04:30:00 +0000 (0:00:00.310) 0:01:17.377 *******
2026-02-15 04:31:30.871831 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-02-15 04:31:30.871843 | orchestrator |
2026-02-15 04:31:30.871854 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-02-15 04:31:30.871865 | orchestrator | Sunday 15 February 2026 04:30:03 +0000 (0:00:02.388) 0:01:19.765 *******
2026-02-15 04:31:30.871877 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-02-15 04:31:30.871888 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-02-15 04:31:30.871899 | orchestrator |
2026-02-15 04:31:30.871910 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-02-15 04:31:30.871921 | orchestrator | Sunday 15 February 2026 04:30:05 +0000 (0:00:02.496) 0:01:22.261 *******
2026-02-15 04:31:30.871958 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:31:30.871969 | orchestrator |
2026-02-15 04:31:30.871980 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-15 04:31:30.872095 | orchestrator | Sunday 15 February 2026 04:30:21 +0000 (0:00:16.369) 0:01:38.631 *******
2026-02-15 04:31:30.872107 | orchestrator |
2026-02-15 04:31:30.872118 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-15 04:31:30.872129 | orchestrator | Sunday 15 February 2026 04:30:22 +0000 (0:00:00.073) 0:01:38.705 *******
2026-02-15 04:31:30.872139 | orchestrator |
2026-02-15 04:31:30.872150 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-02-15 04:31:30.872162 | orchestrator | Sunday 15 February 2026 04:30:22 +0000 (0:00:00.069) 0:01:38.775 *******
2026-02-15 04:31:30.872173 | orchestrator |
2026-02-15 
04:31:30.872184 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-02-15 04:31:30.872195 | orchestrator | Sunday 15 February 2026 04:30:22 +0000 (0:00:00.072) 0:01:38.847 ******* 2026-02-15 04:31:30.872206 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:31:30.872216 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:31:30.872230 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:31:30.872242 | orchestrator | 2026-02-15 04:31:30.872255 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-02-15 04:31:30.872268 | orchestrator | Sunday 15 February 2026 04:30:35 +0000 (0:00:13.167) 0:01:52.014 ******* 2026-02-15 04:31:30.872280 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:31:30.872294 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:31:30.872306 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:31:30.872319 | orchestrator | 2026-02-15 04:31:30.872331 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-02-15 04:31:30.872344 | orchestrator | Sunday 15 February 2026 04:30:45 +0000 (0:00:10.436) 0:02:02.451 ******* 2026-02-15 04:31:30.872357 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:31:30.872369 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:31:30.872382 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:31:30.872394 | orchestrator | 2026-02-15 04:31:30.872407 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-02-15 04:31:30.872420 | orchestrator | Sunday 15 February 2026 04:30:56 +0000 (0:00:10.607) 0:02:13.058 ******* 2026-02-15 04:31:30.872432 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:31:30.872445 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:31:30.872457 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:31:30.872469 | orchestrator | 2026-02-15 04:31:30.872482 
| orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-02-15 04:31:30.872495 | orchestrator | Sunday 15 February 2026 04:31:02 +0000 (0:00:05.608) 0:02:18.667 ******* 2026-02-15 04:31:30.872507 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:31:30.872520 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:31:30.872532 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:31:30.872545 | orchestrator | 2026-02-15 04:31:30.872557 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-02-15 04:31:30.872570 | orchestrator | Sunday 15 February 2026 04:31:12 +0000 (0:00:10.740) 0:02:29.407 ******* 2026-02-15 04:31:30.872583 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:31:30.872596 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:31:30.872608 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:31:30.872618 | orchestrator | 2026-02-15 04:31:30.872629 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-02-15 04:31:30.872640 | orchestrator | Sunday 15 February 2026 04:31:23 +0000 (0:00:10.309) 0:02:39.716 ******* 2026-02-15 04:31:30.872651 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:31:30.872662 | orchestrator | 2026-02-15 04:31:30.872673 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:31:30.872685 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-15 04:31:30.872707 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 04:31:30.872718 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 04:31:30.872729 | orchestrator | 2026-02-15 04:31:30.872740 | orchestrator | 2026-02-15 04:31:30.872765 | orchestrator | TASKS RECAP 
******************************************************************** 2026-02-15 04:31:30.872776 | orchestrator | Sunday 15 February 2026 04:31:30 +0000 (0:00:07.370) 0:02:47.087 ******* 2026-02-15 04:31:30.872787 | orchestrator | =============================================================================== 2026-02-15 04:31:30.872798 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.37s 2026-02-15 04:31:30.872809 | orchestrator | designate : Copying over designate.conf -------------------------------- 14.41s 2026-02-15 04:31:30.872838 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.17s 2026-02-15 04:31:30.872850 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.74s 2026-02-15 04:31:30.872861 | orchestrator | designate : Restart designate-central container ------------------------ 10.61s 2026-02-15 04:31:30.872872 | orchestrator | designate : Restart designate-api container ---------------------------- 10.44s 2026-02-15 04:31:30.872883 | orchestrator | designate : Restart designate-worker container ------------------------- 10.31s 2026-02-15 04:31:30.872894 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.37s 2026-02-15 04:31:30.872905 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.54s 2026-02-15 04:31:30.872916 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.43s 2026-02-15 04:31:30.872926 | orchestrator | designate : Copying over config.json files for services ----------------- 6.25s 2026-02-15 04:31:30.872937 | orchestrator | designate : Restart designate-producer container ------------------------ 5.61s 2026-02-15 04:31:30.872948 | orchestrator | designate : Check designate containers ---------------------------------- 5.05s 2026-02-15 04:31:30.872959 | orchestrator | service-ks-register : designate | 
Creating users ------------------------ 4.06s 2026-02-15 04:31:30.872970 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.76s 2026-02-15 04:31:30.872980 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 3.57s 2026-02-15 04:31:30.873012 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.42s 2026-02-15 04:31:30.873023 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.35s 2026-02-15 04:31:30.873034 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.19s 2026-02-15 04:31:30.873045 | orchestrator | designate : Ensuring config directories exist --------------------------- 2.90s 2026-02-15 04:31:33.273328 | orchestrator | 2026-02-15 04:31:33 | INFO  | Task 4cdb025a-4479-4ef1-8e31-ca054ec9d220 (octavia) was prepared for execution. 2026-02-15 04:31:33.273457 | orchestrator | 2026-02-15 04:31:33 | INFO  | It takes a moment until task 4cdb025a-4479-4ef1-8e31-ca054ec9d220 (octavia) has been started and output is visible here. 
2026-02-15 04:33:42.716401 | orchestrator | 2026-02-15 04:33:42.716505 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:33:42.716524 | orchestrator | 2026-02-15 04:33:42.716538 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:33:42.716552 | orchestrator | Sunday 15 February 2026 04:31:37 +0000 (0:00:00.259) 0:00:00.259 ******* 2026-02-15 04:33:42.716566 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:33:42.716581 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:33:42.716595 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:33:42.716609 | orchestrator | 2026-02-15 04:33:42.716624 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:33:42.716653 | orchestrator | Sunday 15 February 2026 04:31:37 +0000 (0:00:00.303) 0:00:00.562 ******* 2026-02-15 04:33:42.716662 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-02-15 04:33:42.716671 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-02-15 04:33:42.716679 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-02-15 04:33:42.716687 | orchestrator | 2026-02-15 04:33:42.716695 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-02-15 04:33:42.716702 | orchestrator | 2026-02-15 04:33:42.716711 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-15 04:33:42.716718 | orchestrator | Sunday 15 February 2026 04:31:38 +0000 (0:00:00.458) 0:00:01.021 ******* 2026-02-15 04:33:42.716727 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:33:42.716735 | orchestrator | 2026-02-15 04:33:42.716743 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 
2026-02-15 04:33:42.716751 | orchestrator | Sunday 15 February 2026 04:31:38 +0000 (0:00:00.569) 0:00:01.590 ******* 2026-02-15 04:33:42.716759 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-02-15 04:33:42.716767 | orchestrator | 2026-02-15 04:33:42.716775 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-02-15 04:33:42.716783 | orchestrator | Sunday 15 February 2026 04:31:42 +0000 (0:00:03.637) 0:00:05.227 ******* 2026-02-15 04:33:42.716790 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-02-15 04:33:42.716798 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2026-02-15 04:33:42.716806 | orchestrator | 2026-02-15 04:33:42.716814 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-02-15 04:33:42.716821 | orchestrator | Sunday 15 February 2026 04:31:49 +0000 (0:00:06.752) 0:00:11.980 ******* 2026-02-15 04:33:42.716829 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-15 04:33:42.716838 | orchestrator | 2026-02-15 04:33:42.716845 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-02-15 04:33:42.716876 | orchestrator | Sunday 15 February 2026 04:31:52 +0000 (0:00:03.431) 0:00:15.411 ******* 2026-02-15 04:33:42.716891 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-15 04:33:42.716905 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-15 04:33:42.716943 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-02-15 04:33:42.716958 | orchestrator | 2026-02-15 04:33:42.716972 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-02-15 04:33:42.716987 | orchestrator | Sunday 15 February 2026 04:32:01 +0000 
(0:00:08.574) 0:00:23.986 ******* 2026-02-15 04:33:42.717002 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-15 04:33:42.717016 | orchestrator | 2026-02-15 04:33:42.717030 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-02-15 04:33:42.717044 | orchestrator | Sunday 15 February 2026 04:32:04 +0000 (0:00:03.407) 0:00:27.393 ******* 2026-02-15 04:33:42.717059 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-15 04:33:42.717074 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-02-15 04:33:42.717088 | orchestrator | 2026-02-15 04:33:42.717102 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-02-15 04:33:42.717111 | orchestrator | Sunday 15 February 2026 04:32:12 +0000 (0:00:07.635) 0:00:35.029 ******* 2026-02-15 04:33:42.717121 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-02-15 04:33:42.717130 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-02-15 04:33:42.717139 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-02-15 04:33:42.717148 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-02-15 04:33:42.717166 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-02-15 04:33:42.717175 | orchestrator | 2026-02-15 04:33:42.717185 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-15 04:33:42.717194 | orchestrator | Sunday 15 February 2026 04:32:28 +0000 (0:00:16.031) 0:00:51.061 ******* 2026-02-15 04:33:42.717202 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:33:42.717210 | orchestrator | 2026-02-15 04:33:42.717218 | orchestrator | TASK [octavia : Create amphora flavor] 
***************************************** 2026-02-15 04:33:42.717226 | orchestrator | Sunday 15 February 2026 04:32:29 +0000 (0:00:00.774) 0:00:51.835 ******* 2026-02-15 04:33:42.717234 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:33:42.717243 | orchestrator | 2026-02-15 04:33:42.717251 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-02-15 04:33:42.717259 | orchestrator | Sunday 15 February 2026 04:32:34 +0000 (0:00:05.164) 0:00:57.000 ******* 2026-02-15 04:33:42.717267 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:33:42.717275 | orchestrator | 2026-02-15 04:33:42.717283 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-15 04:33:42.717306 | orchestrator | Sunday 15 February 2026 04:32:38 +0000 (0:00:03.919) 0:01:00.919 ******* 2026-02-15 04:33:42.717314 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:33:42.717322 | orchestrator | 2026-02-15 04:33:42.717335 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-02-15 04:33:42.717348 | orchestrator | Sunday 15 February 2026 04:32:41 +0000 (0:00:03.266) 0:01:04.186 ******* 2026-02-15 04:33:42.717361 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-15 04:33:42.717373 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-15 04:33:42.717387 | orchestrator | 2026-02-15 04:33:42.717400 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-02-15 04:33:42.717415 | orchestrator | Sunday 15 February 2026 04:32:52 +0000 (0:00:10.927) 0:01:15.113 ******* 2026-02-15 04:33:42.717423 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-02-15 04:33:42.717431 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 
'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-02-15 04:33:42.717440 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-02-15 04:33:42.717453 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-02-15 04:33:42.717462 | orchestrator | 2026-02-15 04:33:42.717470 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-02-15 04:33:42.717478 | orchestrator | Sunday 15 February 2026 04:33:08 +0000 (0:00:16.495) 0:01:31.608 ******* 2026-02-15 04:33:42.717486 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:33:42.717494 | orchestrator | 2026-02-15 04:33:42.717502 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-02-15 04:33:42.717510 | orchestrator | Sunday 15 February 2026 04:33:13 +0000 (0:00:04.558) 0:01:36.167 ******* 2026-02-15 04:33:42.717518 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:33:42.717526 | orchestrator | 2026-02-15 04:33:42.717534 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-02-15 04:33:42.717542 | orchestrator | Sunday 15 February 2026 04:33:18 +0000 (0:00:05.590) 0:01:41.758 ******* 2026-02-15 04:33:42.717550 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:33:42.717558 | orchestrator | 2026-02-15 04:33:42.717566 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-02-15 04:33:42.717574 | orchestrator | Sunday 15 February 2026 04:33:19 +0000 (0:00:00.225) 0:01:41.984 ******* 2026-02-15 04:33:42.717589 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:33:42.717597 | orchestrator | 2026-02-15 04:33:42.717605 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-02-15 04:33:42.717618 | orchestrator | Sunday 15 February 2026 04:33:23 +0000 (0:00:04.676) 0:01:46.661 ******* 2026-02-15 04:33:42.717627 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:33:42.717635 | orchestrator | 2026-02-15 04:33:42.717643 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-02-15 04:33:42.717651 | orchestrator | Sunday 15 February 2026 04:33:24 +0000 (0:00:01.104) 0:01:47.765 ******* 2026-02-15 04:33:42.717659 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:33:42.717667 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:33:42.717675 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:33:42.717685 | orchestrator | 2026-02-15 04:33:42.717699 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-02-15 04:33:42.717712 | orchestrator | Sunday 15 February 2026 04:33:30 +0000 (0:00:05.228) 0:01:52.993 ******* 2026-02-15 04:33:42.717724 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:33:42.717738 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:33:42.717751 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:33:42.717766 | orchestrator | 2026-02-15 04:33:42.717779 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-02-15 04:33:42.717794 | orchestrator | Sunday 15 February 2026 04:33:34 +0000 (0:00:04.756) 0:01:57.750 ******* 2026-02-15 04:33:42.717802 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:33:42.717810 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:33:42.717818 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:33:42.717826 | orchestrator | 2026-02-15 04:33:42.717834 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-02-15 
04:33:42.717841 | orchestrator | Sunday 15 February 2026 04:33:36 +0000 (0:00:01.045) 0:01:58.795 ******* 2026-02-15 04:33:42.717849 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:33:42.717857 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:33:42.717865 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:33:42.717872 | orchestrator | 2026-02-15 04:33:42.717880 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-02-15 04:33:42.717888 | orchestrator | Sunday 15 February 2026 04:33:37 +0000 (0:00:01.925) 0:02:00.721 ******* 2026-02-15 04:33:42.717896 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:33:42.717904 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:33:42.717911 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:33:42.717939 | orchestrator | 2026-02-15 04:33:42.717949 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-02-15 04:33:42.717956 | orchestrator | Sunday 15 February 2026 04:33:39 +0000 (0:00:01.314) 0:02:02.036 ******* 2026-02-15 04:33:42.717964 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:33:42.717972 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:33:42.718098 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:33:42.718114 | orchestrator | 2026-02-15 04:33:42.718128 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-02-15 04:33:42.718141 | orchestrator | Sunday 15 February 2026 04:33:40 +0000 (0:00:01.196) 0:02:03.233 ******* 2026-02-15 04:33:42.718183 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:33:42.718192 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:33:42.718200 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:33:42.718208 | orchestrator | 2026-02-15 04:33:42.718225 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-02-15 04:34:10.061391 | orchestrator 
| Sunday 15 February 2026 04:33:42 +0000 (0:00:02.254) 0:02:05.487 ******* 2026-02-15 04:34:10.061488 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:34:10.061501 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:34:10.061510 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:34:10.061518 | orchestrator | 2026-02-15 04:34:10.061528 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-02-15 04:34:10.061556 | orchestrator | Sunday 15 February 2026 04:33:44 +0000 (0:00:01.456) 0:02:06.944 ******* 2026-02-15 04:34:10.061569 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:34:10.061584 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:34:10.061597 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:34:10.061609 | orchestrator | 2026-02-15 04:34:10.061623 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2026-02-15 04:34:10.061637 | orchestrator | Sunday 15 February 2026 04:33:44 +0000 (0:00:00.646) 0:02:07.591 ******* 2026-02-15 04:34:10.061652 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:34:10.061666 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:34:10.061679 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:34:10.061692 | orchestrator | 2026-02-15 04:34:10.061700 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-15 04:34:10.061708 | orchestrator | Sunday 15 February 2026 04:33:47 +0000 (0:00:03.050) 0:02:10.641 ******* 2026-02-15 04:34:10.061717 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:34:10.061725 | orchestrator | 2026-02-15 04:34:10.061733 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-02-15 04:34:10.061741 | orchestrator | Sunday 15 February 2026 04:33:48 +0000 (0:00:00.551) 0:02:11.193 ******* 2026-02-15 
04:34:10.061749 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:34:10.061756 | orchestrator | 2026-02-15 04:34:10.061764 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-02-15 04:34:10.061772 | orchestrator | Sunday 15 February 2026 04:33:52 +0000 (0:00:04.309) 0:02:15.502 ******* 2026-02-15 04:34:10.061780 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:34:10.061788 | orchestrator | 2026-02-15 04:34:10.061796 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-02-15 04:34:10.061803 | orchestrator | Sunday 15 February 2026 04:33:56 +0000 (0:00:03.360) 0:02:18.863 ******* 2026-02-15 04:34:10.061812 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-02-15 04:34:10.061820 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-02-15 04:34:10.061828 | orchestrator | 2026-02-15 04:34:10.061836 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-02-15 04:34:10.061844 | orchestrator | Sunday 15 February 2026 04:34:03 +0000 (0:00:07.804) 0:02:26.667 ******* 2026-02-15 04:34:10.061852 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:34:10.061859 | orchestrator | 2026-02-15 04:34:10.061867 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-02-15 04:34:10.061889 | orchestrator | Sunday 15 February 2026 04:34:07 +0000 (0:00:03.653) 0:02:30.321 ******* 2026-02-15 04:34:10.061897 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:34:10.061946 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:34:10.061958 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:34:10.061967 | orchestrator | 2026-02-15 04:34:10.061976 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-02-15 04:34:10.061986 | orchestrator | Sunday 15 February 2026 04:34:08 +0000 (0:00:00.520) 0:02:30.841 ******* 
2026-02-15 04:34:10.061998 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:10.062082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:10.062093 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:10.062103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:10.062118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:10.062126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:10.062136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:10.062152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:10.062167 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:11.617755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:11.617867 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:11.617957 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:11.617977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:34:11.618014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:34:11.618129 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:34:11.618142 | orchestrator | 2026-02-15 04:34:11.618157 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-02-15 04:34:11.618170 | orchestrator | Sunday 15 February 2026 04:34:10 +0000 (0:00:02.427) 0:02:33.269 ******* 2026-02-15 04:34:11.618181 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:34:11.618193 | orchestrator | 2026-02-15 04:34:11.618204 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-02-15 04:34:11.618215 | orchestrator | Sunday 15 February 2026 04:34:10 +0000 (0:00:00.135) 0:02:33.405 ******* 2026-02-15 04:34:11.618225 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:34:11.618257 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:34:11.618269 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:34:11.618279 | orchestrator | 2026-02-15 04:34:11.618290 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-02-15 04:34:11.618301 | orchestrator | Sunday 15 February 2026 04:34:10 +0000 (0:00:00.356) 0:02:33.761 ******* 2026-02-15 04:34:11.618314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 04:34:11.618337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 04:34:11.618350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 04:34:11.618375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 04:34:11.618386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:34:11.618398 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:34:11.618421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 04:34:16.487478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 04:34:16.487607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 04:34:16.487626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 04:34:16.487683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:34:16.487698 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:34:16.487713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 04:34:16.487726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 04:34:16.487757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 04:34:16.487769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 04:34:16.487828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 
'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:34:16.487841 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:34:16.487853 | orchestrator | 2026-02-15 04:34:16.487866 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-15 04:34:16.487878 | orchestrator | Sunday 15 February 2026 04:34:11 +0000 (0:00:00.730) 0:02:34.491 ******* 2026-02-15 04:34:16.487890 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:34:16.487901 | orchestrator | 2026-02-15 04:34:16.487942 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-02-15 04:34:16.487953 | orchestrator | Sunday 15 February 2026 04:34:12 +0000 (0:00:00.807) 0:02:35.299 ******* 2026-02-15 04:34:16.487964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:16.487977 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:16.487999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:18.052445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:18.052555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:18.052572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:18.052586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:18.052600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:18.052611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:18.052642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:18.052686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:18.052699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:18.052789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:34:18.052802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:34:18.052813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:34:18.052826 | orchestrator | 2026-02-15 04:34:18.052840 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-02-15 04:34:18.052854 | orchestrator | Sunday 15 February 2026 04:34:17 +0000 (0:00:04.917) 0:02:40.217 ******* 2026-02-15 04:34:18.052878 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 04:34:18.165070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 04:34:18.165181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 04:34:18.165198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 04:34:18.165211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:34:18.165225 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:34:18.165240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 04:34:18.165274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 04:34:18.165325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 04:34:18.165339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 04:34:18.165350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:34:18.165362 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:34:18.165373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 04:34:18.165385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 04:34:18.165404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 04:34:18.165430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 
3306'], 'timeout': '30'}}})  2026-02-15 04:34:18.967523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:34:18.967610 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:34:18.967622 | orchestrator | 2026-02-15 04:34:18.967631 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-02-15 04:34:18.967641 | orchestrator | Sunday 15 February 2026 04:34:18 +0000 (0:00:00.728) 0:02:40.945 ******* 2026-02-15 04:34:18.967650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  
2026-02-15 04:34:18.967660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 04:34:18.967668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 04:34:18.967696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 04:34:18.967730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 
'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:34:18.967739 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:34:18.967747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 04:34:18.967755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 04:34:18.967763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 04:34:18.967776 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 04:34:18.967784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:34:18.967791 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:34:18.967808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 04:34:23.582522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 04:34:23.583463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 04:34:23.583506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 04:34:23.583559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 04:34:23.583584 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:34:23.583648 | orchestrator | 2026-02-15 04:34:23.583672 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-02-15 
04:34:23.583688 | orchestrator | Sunday 15 February 2026 04:34:19 +0000 (0:00:01.274) 0:02:42.220 ******* 2026-02-15 04:34:23.583714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:23.583751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:23.583764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:23.583776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:23.583799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:23.583810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:23.583822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:23.583847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:39.263110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:39.263199 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:39.263222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:39.263227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:39.263232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:34:39.263247 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
2026-02-15 04:34:39.263261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:34:39.263268 | orchestrator | 2026-02-15 04:34:39.263302 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-02-15 04:34:39.263308 | orchestrator | Sunday 15 February 2026 04:34:24 +0000 (0:00:05.125) 0:02:47.345 ******* 2026-02-15 04:34:39.263312 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-15 04:34:39.263318 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-15 04:34:39.263322 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-02-15 04:34:39.263326 | orchestrator | 2026-02-15 04:34:39.263331 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-02-15 04:34:39.263338 | orchestrator | Sunday 15 February 2026 04:34:26 +0000 (0:00:01.613) 0:02:48.959 ******* 2026-02-15 04:34:39.263344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:39.263350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:39.263357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:34:39.263366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:54.617482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:54.617621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:34:54.617640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:54.617654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:54.617666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:54.617693 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:54.617725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:54.617738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:34:54.617759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:34:54.617772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:34:54.617784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:34:54.617797 | orchestrator | 2026-02-15 04:34:54.617811 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-02-15 04:34:54.617824 | orchestrator | Sunday 15 February 2026 04:34:42 +0000 (0:00:16.482) 0:03:05.441 ******* 2026-02-15 04:34:54.617836 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:34:54.617848 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:34:54.617859 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:34:54.617871 | orchestrator | 2026-02-15 04:34:54.617882 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-02-15 04:34:54.617942 | orchestrator | Sunday 15 February 2026 04:34:44 +0000 (0:00:01.713) 0:03:07.155 ******* 2026-02-15 04:34:54.617955 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-15 04:34:54.617966 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-15 04:34:54.617977 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-15 04:34:54.617988 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-15 04:34:54.617998 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-15 04:34:54.618109 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-15 04:34:54.618127 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-15 04:34:54.618140 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-15 04:34:54.618152 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-15 04:34:54.618165 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-15 04:34:54.618178 | orchestrator 
| changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-15 04:34:54.618200 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-15 04:34:54.618214 | orchestrator | 2026-02-15 04:34:54.618226 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-02-15 04:34:54.618239 | orchestrator | Sunday 15 February 2026 04:34:49 +0000 (0:00:05.112) 0:03:12.267 ******* 2026-02-15 04:34:54.618251 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-02-15 04:34:54.618264 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-15 04:34:54.618287 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-15 04:35:02.983537 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-15 04:35:02.983648 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-15 04:35:02.983664 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-15 04:35:02.983676 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-15 04:35:02.983688 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-15 04:35:02.983699 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-15 04:35:02.983710 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-15 04:35:02.983721 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-15 04:35:02.983732 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-15 04:35:02.983743 | orchestrator | 2026-02-15 04:35:02.983756 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-02-15 04:35:02.983768 | orchestrator | Sunday 15 February 2026 04:34:54 +0000 (0:00:05.125) 0:03:17.393 ******* 2026-02-15 04:35:02.983779 | orchestrator | changed: [testbed-node-0] => 
(item=client.cert-and-key.pem) 2026-02-15 04:35:02.983790 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-02-15 04:35:02.983801 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-02-15 04:35:02.983811 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-02-15 04:35:02.983823 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-02-15 04:35:02.983834 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-02-15 04:35:02.983845 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-02-15 04:35:02.983856 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-02-15 04:35:02.983867 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-02-15 04:35:02.983878 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-02-15 04:35:02.983973 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-02-15 04:35:02.983990 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-02-15 04:35:02.984001 | orchestrator | 2026-02-15 04:35:02.984013 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-02-15 04:35:02.984025 | orchestrator | Sunday 15 February 2026 04:34:59 +0000 (0:00:05.131) 0:03:22.524 ******* 2026-02-15 04:35:02.984040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:35:02.984100 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:35:02.984143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 04:35:02.984158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:35:02.984173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-02-15 04:35:02.984186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-02-15 04:35:02.984200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:35:02.984229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:35:02.984243 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-02-15 04:35:02.984264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:36:29.695269 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:36:29.695373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-02-15 04:36:29.695385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:36:29.695415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:36:29.695434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-02-15 04:36:29.695441 | orchestrator | 2026-02-15 
04:36:29.695449 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-02-15 04:36:29.695458 | orchestrator | Sunday 15 February 2026 04:35:03 +0000 (0:00:04.078) 0:03:26.603 ******* 2026-02-15 04:36:29.695464 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:36:29.695471 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:36:29.695478 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:36:29.695484 | orchestrator | 2026-02-15 04:36:29.695490 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-02-15 04:36:29.695496 | orchestrator | Sunday 15 February 2026 04:35:04 +0000 (0:00:00.543) 0:03:27.146 ******* 2026-02-15 04:36:29.695502 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:36:29.695508 | orchestrator | 2026-02-15 04:36:29.695514 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-02-15 04:36:29.695519 | orchestrator | Sunday 15 February 2026 04:35:06 +0000 (0:00:02.224) 0:03:29.371 ******* 2026-02-15 04:36:29.695526 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:36:29.695532 | orchestrator | 2026-02-15 04:36:29.695538 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-02-15 04:36:29.695545 | orchestrator | Sunday 15 February 2026 04:35:08 +0000 (0:00:02.215) 0:03:31.586 ******* 2026-02-15 04:36:29.695551 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:36:29.695557 | orchestrator | 2026-02-15 04:36:29.695564 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-02-15 04:36:29.695571 | orchestrator | Sunday 15 February 2026 04:35:11 +0000 (0:00:02.371) 0:03:33.957 ******* 2026-02-15 04:36:29.695591 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:36:29.695597 | orchestrator | 2026-02-15 04:36:29.695603 | orchestrator | TASK [octavia : Running Octavia 
bootstrap container] *************************** 2026-02-15 04:36:29.695609 | orchestrator | Sunday 15 February 2026 04:35:13 +0000 (0:00:02.332) 0:03:36.290 ******* 2026-02-15 04:36:29.695614 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:36:29.695620 | orchestrator | 2026-02-15 04:36:29.695626 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-15 04:36:29.695631 | orchestrator | Sunday 15 February 2026 04:35:37 +0000 (0:00:23.632) 0:03:59.922 ******* 2026-02-15 04:36:29.695637 | orchestrator | 2026-02-15 04:36:29.695643 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-15 04:36:29.695648 | orchestrator | Sunday 15 February 2026 04:35:37 +0000 (0:00:00.067) 0:03:59.990 ******* 2026-02-15 04:36:29.695654 | orchestrator | 2026-02-15 04:36:29.695660 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-02-15 04:36:29.695665 | orchestrator | Sunday 15 February 2026 04:35:37 +0000 (0:00:00.067) 0:04:00.058 ******* 2026-02-15 04:36:29.695680 | orchestrator | 2026-02-15 04:36:29.695686 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-02-15 04:36:29.695693 | orchestrator | Sunday 15 February 2026 04:35:37 +0000 (0:00:00.068) 0:04:00.126 ******* 2026-02-15 04:36:29.695698 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:36:29.695704 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:36:29.695710 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:36:29.695717 | orchestrator | 2026-02-15 04:36:29.695722 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-02-15 04:36:29.695728 | orchestrator | Sunday 15 February 2026 04:35:48 +0000 (0:00:11.608) 0:04:11.734 ******* 2026-02-15 04:36:29.695733 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:36:29.695739 | orchestrator | changed: 
[testbed-node-1] 2026-02-15 04:36:29.695745 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:36:29.695752 | orchestrator | 2026-02-15 04:36:29.695758 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-02-15 04:36:29.695764 | orchestrator | Sunday 15 February 2026 04:35:59 +0000 (0:00:11.024) 0:04:22.759 ******* 2026-02-15 04:36:29.695771 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:36:29.695777 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:36:29.695783 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:36:29.695789 | orchestrator | 2026-02-15 04:36:29.695795 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-02-15 04:36:29.695801 | orchestrator | Sunday 15 February 2026 04:36:10 +0000 (0:00:10.262) 0:04:33.021 ******* 2026-02-15 04:36:29.695807 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:36:29.695813 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:36:29.695819 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:36:29.695826 | orchestrator | 2026-02-15 04:36:29.695832 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-02-15 04:36:29.695839 | orchestrator | Sunday 15 February 2026 04:36:18 +0000 (0:00:08.399) 0:04:41.420 ******* 2026-02-15 04:36:29.695846 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:36:29.695853 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:36:29.695859 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:36:29.695888 | orchestrator | 2026-02-15 04:36:29.695895 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:36:29.695903 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-02-15 04:36:29.695911 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-15 04:36:29.695919 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-15 04:36:29.695925 | orchestrator | 2026-02-15 04:36:29.695932 | orchestrator | 2026-02-15 04:36:29.695939 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:36:29.695945 | orchestrator | Sunday 15 February 2026 04:36:29 +0000 (0:00:11.038) 0:04:52.459 ******* 2026-02-15 04:36:29.695958 | orchestrator | =============================================================================== 2026-02-15 04:36:29.695965 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 23.63s 2026-02-15 04:36:29.695971 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.50s 2026-02-15 04:36:29.695978 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.48s 2026-02-15 04:36:29.695984 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.03s 2026-02-15 04:36:29.695992 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.61s 2026-02-15 04:36:29.695999 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 11.04s 2026-02-15 04:36:29.696006 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.02s 2026-02-15 04:36:29.696028 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.93s 2026-02-15 04:36:29.696034 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.26s 2026-02-15 04:36:29.696039 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.57s 2026-02-15 04:36:29.696045 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 8.40s 2026-02-15 04:36:29.696051 
| orchestrator | octavia : Get security groups for octavia ------------------------------- 7.80s 2026-02-15 04:36:29.696057 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.64s 2026-02-15 04:36:29.696062 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.75s 2026-02-15 04:36:29.696076 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.59s 2026-02-15 04:36:30.055927 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.23s 2026-02-15 04:36:30.056039 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.16s 2026-02-15 04:36:30.056053 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.13s 2026-02-15 04:36:30.056065 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.13s 2026-02-15 04:36:30.056076 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.13s 2026-02-15 04:36:32.400351 | orchestrator | 2026-02-15 04:36:32 | INFO  | Task e38751ed-ff4a-4332-b60c-fb169cdfc42e (ceilometer) was prepared for execution. 2026-02-15 04:36:32.400459 | orchestrator | 2026-02-15 04:36:32 | INFO  | It takes a moment until task e38751ed-ff4a-4332-b60c-fb169cdfc42e (ceilometer) has been started and output is visible here. 
2026-02-15 04:36:55.695177 | orchestrator | 2026-02-15 04:36:55.695293 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:36:55.695310 | orchestrator | 2026-02-15 04:36:55.695323 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:36:55.695335 | orchestrator | Sunday 15 February 2026 04:36:36 +0000 (0:00:00.259) 0:00:00.259 ******* 2026-02-15 04:36:55.695346 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:36:55.695359 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:36:55.695370 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:36:55.695380 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:36:55.695391 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:36:55.695402 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:36:55.695413 | orchestrator | 2026-02-15 04:36:55.695424 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:36:55.695434 | orchestrator | Sunday 15 February 2026 04:36:37 +0000 (0:00:00.693) 0:00:00.953 ******* 2026-02-15 04:36:55.695446 | orchestrator | ok: [testbed-node-0] => (item=enable_ceilometer_True) 2026-02-15 04:36:55.695457 | orchestrator | ok: [testbed-node-1] => (item=enable_ceilometer_True) 2026-02-15 04:36:55.695467 | orchestrator | ok: [testbed-node-2] => (item=enable_ceilometer_True) 2026-02-15 04:36:55.695478 | orchestrator | ok: [testbed-node-3] => (item=enable_ceilometer_True) 2026-02-15 04:36:55.695489 | orchestrator | ok: [testbed-node-4] => (item=enable_ceilometer_True) 2026-02-15 04:36:55.695500 | orchestrator | ok: [testbed-node-5] => (item=enable_ceilometer_True) 2026-02-15 04:36:55.695510 | orchestrator | 2026-02-15 04:36:55.695522 | orchestrator | PLAY [Apply role ceilometer] *************************************************** 2026-02-15 04:36:55.695532 | orchestrator | 2026-02-15 04:36:55.695543 | orchestrator | TASK [ceilometer : 
include_tasks] ********************************************** 2026-02-15 04:36:55.695554 | orchestrator | Sunday 15 February 2026 04:36:37 +0000 (0:00:00.610) 0:00:01.564 ******* 2026-02-15 04:36:55.695566 | orchestrator | included: /ansible/roles/ceilometer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 04:36:55.695578 | orchestrator | 2026-02-15 04:36:55.695589 | orchestrator | TASK [service-ks-register : ceilometer | Creating services] ******************** 2026-02-15 04:36:55.695626 | orchestrator | Sunday 15 February 2026 04:36:39 +0000 (0:00:01.179) 0:00:02.744 ******* 2026-02-15 04:36:55.695637 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:36:55.695648 | orchestrator | 2026-02-15 04:36:55.695659 | orchestrator | TASK [service-ks-register : ceilometer | Creating endpoints] ******************* 2026-02-15 04:36:55.695670 | orchestrator | Sunday 15 February 2026 04:36:39 +0000 (0:00:00.127) 0:00:02.871 ******* 2026-02-15 04:36:55.695681 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:36:55.695691 | orchestrator | 2026-02-15 04:36:55.695702 | orchestrator | TASK [service-ks-register : ceilometer | Creating projects] ******************** 2026-02-15 04:36:55.695713 | orchestrator | Sunday 15 February 2026 04:36:39 +0000 (0:00:00.137) 0:00:03.009 ******* 2026-02-15 04:36:55.695724 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-15 04:36:55.695734 | orchestrator | 2026-02-15 04:36:55.695759 | orchestrator | TASK [service-ks-register : ceilometer | Creating users] *********************** 2026-02-15 04:36:55.695771 | orchestrator | Sunday 15 February 2026 04:36:42 +0000 (0:00:03.538) 0:00:06.548 ******* 2026-02-15 04:36:55.695781 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-15 04:36:55.695792 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service) 2026-02-15 04:36:55.695803 | orchestrator | 
2026-02-15 04:36:55.695813 | orchestrator | TASK [service-ks-register : ceilometer | Creating roles] *********************** 2026-02-15 04:36:55.695824 | orchestrator | Sunday 15 February 2026 04:36:46 +0000 (0:00:03.796) 0:00:10.345 ******* 2026-02-15 04:36:55.695835 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-15 04:36:55.695846 | orchestrator | 2026-02-15 04:36:55.695857 | orchestrator | TASK [service-ks-register : ceilometer | Granting user roles] ****************** 2026-02-15 04:36:55.695891 | orchestrator | Sunday 15 February 2026 04:36:50 +0000 (0:00:03.321) 0:00:13.666 ******* 2026-02-15 04:36:55.695903 | orchestrator | changed: [testbed-node-0] => (item=ceilometer -> service -> admin) 2026-02-15 04:36:55.695913 | orchestrator | 2026-02-15 04:36:55.695924 | orchestrator | TASK [ceilometer : Associate the ResellerAdmin role and ceilometer user] ******* 2026-02-15 04:36:55.695935 | orchestrator | Sunday 15 February 2026 04:36:54 +0000 (0:00:04.097) 0:00:17.764 ******* 2026-02-15 04:36:55.695945 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:36:55.695956 | orchestrator | 2026-02-15 04:36:55.695967 | orchestrator | TASK [ceilometer : Ensuring config directories exist] ************************** 2026-02-15 04:36:55.695978 | orchestrator | Sunday 15 February 2026 04:36:54 +0000 (0:00:00.143) 0:00:17.908 ******* 2026-02-15 04:36:55.695992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:36:55.696025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:36:55.696038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:36:55.696059 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 
'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:36:55.696079 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:36:55.696092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:36:55.696104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:36:55.696123 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:00.351166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:37:00.351288 | orchestrator | 2026-02-15 04:37:00.351303 | orchestrator | TASK [ceilometer : Check if the folder for custom meter definitions exist] ***** 2026-02-15 04:37:00.351315 | orchestrator | Sunday 15 February 2026 04:36:55 +0000 (0:00:01.433) 0:00:19.342 ******* 2026-02-15 04:37:00.351327 | orchestrator | ok: 
[testbed-node-1 -> localhost] 2026-02-15 04:37:00.351338 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 04:37:00.351348 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-15 04:37:00.351359 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-15 04:37:00.351368 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-15 04:37:00.351377 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-15 04:37:00.351387 | orchestrator | 2026-02-15 04:37:00.351396 | orchestrator | TASK [ceilometer : Set variable that indicates if we have a folder for custom meter YAML files] *** 2026-02-15 04:37:00.351406 | orchestrator | Sunday 15 February 2026 04:36:57 +0000 (0:00:01.577) 0:00:20.919 ******* 2026-02-15 04:37:00.351416 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:37:00.351427 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:37:00.351437 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:37:00.351447 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:37:00.351458 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:37:00.351468 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:37:00.351478 | orchestrator | 2026-02-15 04:37:00.351489 | orchestrator | TASK [ceilometer : Find all *.yaml files in custom meter definitions folder (if the folder exist)] *** 2026-02-15 04:37:00.351500 | orchestrator | Sunday 15 February 2026 04:36:57 +0000 (0:00:00.618) 0:00:21.537 ******* 2026-02-15 04:37:00.351510 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:37:00.351520 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:37:00.351531 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:37:00.351541 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:37:00.351551 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:37:00.351561 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:37:00.351571 | orchestrator | 2026-02-15 04:37:00.351582 | orchestrator | TASK [ceilometer : Set the variable that control the copy of 
custom meter definitions] *** 2026-02-15 04:37:00.351593 | orchestrator | Sunday 15 February 2026 04:36:58 +0000 (0:00:00.771) 0:00:22.309 ******* 2026-02-15 04:37:00.351603 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:37:00.351612 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:37:00.351623 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:37:00.351633 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:37:00.351642 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:37:00.351652 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:37:00.351662 | orchestrator | 2026-02-15 04:37:00.351673 | orchestrator | TASK [ceilometer : Create default folder for custom meter definitions] ********* 2026-02-15 04:37:00.351683 | orchestrator | Sunday 15 February 2026 04:36:59 +0000 (0:00:00.596) 0:00:22.906 ******* 2026-02-15 04:37:00.351695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:00.351716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 04:37:00.351730 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:37:00.351762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:00.351775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 04:37:00.351787 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:37:00.351848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:00.351903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 04:37:00.351915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:00.351934 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:37:00.351945 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:37:00.351955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:00.351966 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:37:00.351986 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:05.008784 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:37:05.008952 | orchestrator | 2026-02-15 04:37:05.008968 | orchestrator | TASK [ceilometer : Copying custom meter definitions to Ceilometer] ************* 2026-02-15 04:37:05.008977 | orchestrator | Sunday 15 February 2026 04:37:00 +0000 (0:00:01.094) 0:00:24.000 ******* 2026-02-15 04:37:05.008987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': 
{'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:05.008998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 04:37:05.009007 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:37:05.009042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:05.009075 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 04:37:05.009083 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:37:05.009090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:05.009097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': 
'30'}}})  2026-02-15 04:37:05.009104 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:37:05.009125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:05.009134 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:37:05.009141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:05.009152 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:37:05.009159 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:05.009172 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:37:05.009179 | orchestrator | 2026-02-15 04:37:05.009188 | orchestrator | TASK [ceilometer : Check if the folder ["/opt/configuration/environments/kolla/files/overlays/ceilometer/pollsters.d"] for dynamic pollsters definitions exist] *** 2026-02-15 04:37:05.009196 | orchestrator | Sunday 15 February 2026 04:37:01 +0000 (0:00:00.872) 0:00:24.872 ******* 2026-02-15 04:37:05.009203 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 04:37:05.009209 | orchestrator | 2026-02-15 04:37:05.009216 | orchestrator | TASK [ceilometer : Set the variable that control the copy of dynamic pollsters definitions] *** 2026-02-15 04:37:05.009224 | orchestrator | Sunday 15 February 2026 04:37:01 +0000 (0:00:00.700) 0:00:25.573 ******* 2026-02-15 04:37:05.009231 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:37:05.009239 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:37:05.009245 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:37:05.009252 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:37:05.009259 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:37:05.009265 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:37:05.009272 | orchestrator | 2026-02-15 04:37:05.009279 | orchestrator | TASK [ceilometer : Clean default folder for dynamic pollsters definitions] ***** 2026-02-15 04:37:05.009286 | orchestrator | Sunday 15 February 2026 04:37:02 +0000 (0:00:00.770) 
0:00:26.343 ******* 2026-02-15 04:37:05.009298 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:37:05.009309 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:37:05.009319 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:37:05.009330 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:37:05.009342 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:37:05.009354 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:37:05.009366 | orchestrator | 2026-02-15 04:37:05.009379 | orchestrator | TASK [ceilometer : Create default folder for dynamic pollsters definitions] **** 2026-02-15 04:37:05.009389 | orchestrator | Sunday 15 February 2026 04:37:03 +0000 (0:00:00.919) 0:00:27.263 ******* 2026-02-15 04:37:05.009397 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:37:05.009405 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:37:05.009413 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:37:05.009421 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:37:05.009429 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:37:05.009437 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:37:05.009445 | orchestrator | 2026-02-15 04:37:05.009453 | orchestrator | TASK [ceilometer : Copying dynamic pollsters definitions] ********************** 2026-02-15 04:37:05.009461 | orchestrator | Sunday 15 February 2026 04:37:04 +0000 (0:00:00.795) 0:00:28.059 ******* 2026-02-15 04:37:05.009470 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:37:05.009476 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:37:05.009483 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:37:05.009489 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:37:05.009495 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:37:05.009502 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:37:05.009509 | orchestrator | 2026-02-15 04:37:09.904998 | orchestrator | TASK [ceilometer : Check if custom polling.yaml exists] 
************************
2026-02-15 04:37:09.905142 | orchestrator | Sunday 15 February 2026 04:37:05 +0000 (0:00:00.606) 0:00:28.665 *******
2026-02-15 04:37:09.905172 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 04:37:09.905244 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-15 04:37:09.905264 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-15 04:37:09.905283 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-15 04:37:09.905301 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-15 04:37:09.905352 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-15 04:37:09.905370 | orchestrator |
2026-02-15 04:37:09.905389 | orchestrator | TASK [ceilometer : Copying over polling.yaml] **********************************
2026-02-15 04:37:09.905405 | orchestrator | Sunday 15 February 2026 04:37:06 +0000 (0:00:01.423) 0:00:30.089 *******
2026-02-15 04:37:09.905426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:09.905465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:09.905485 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:37:09.905504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:09.905521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:09.905538 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:37:09.905555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:09.905601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:09.905634 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:37:09.905652 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:09.905669 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:37:09.905695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:09.905713 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:37:09.905730 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:09.905748 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:37:09.905766 | orchestrator |
2026-02-15 04:37:09.905779 | orchestrator | TASK [ceilometer : Set ceilometer polling file's path] *************************
2026-02-15 04:37:09.905792 | orchestrator | Sunday 15 February 2026 04:37:07 +0000 (0:00:00.816) 0:00:30.905 *******
2026-02-15 04:37:09.905804 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:37:09.905815 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:37:09.905827 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:37:09.905838 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:37:09.905849 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:37:09.905937 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:37:09.905951 | orchestrator |
2026-02-15 04:37:09.905962 | orchestrator | TASK [ceilometer : Check custom gnocchi_resources.yaml exists] *****************
2026-02-15 04:37:09.905974 | orchestrator | Sunday 15 February 2026 04:37:08 +0000 (0:00:00.819) 0:00:31.725 *******
2026-02-15 04:37:09.905985 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 04:37:09.905997 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-15 04:37:09.906009 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-15 04:37:09.906084 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-15 04:37:09.906104 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-15 04:37:09.906113 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-15 04:37:09.906121 | orchestrator |
2026-02-15 04:37:09.906130 | orchestrator | TASK [ceilometer : Copying over gnocchi_resources.yaml] ************************
2026-02-15 04:37:09.906137 | orchestrator | Sunday 15 February 2026 04:37:09 +0000 (0:00:01.337) 0:00:33.062 *******
2026-02-15 04:37:09.906158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:15.744081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:15.744217 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:37:15.744256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:15.744271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:15.744284 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:37:15.744296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:15.744307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:15.744342 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:37:15.744354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:15.744367 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:37:15.744396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:15.744408 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:37:15.744425 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:15.744436 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:37:15.744448 | orchestrator |
2026-02-15 04:37:15.744460 | orchestrator | TASK [ceilometer : Set ceilometer gnocchi_resources file's path] ***************
2026-02-15 04:37:15.744473 | orchestrator | Sunday 15 February 2026 04:37:10 +0000 (0:00:01.264) 0:00:34.326 *******
2026-02-15 04:37:15.744484 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:37:15.744495 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:37:15.744505 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:37:15.744516 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:37:15.744527 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:37:15.744538 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:37:15.744549 | orchestrator |
2026-02-15 04:37:15.744560 | orchestrator | TASK [ceilometer : Check if policies shall be overwritten] *********************
2026-02-15 04:37:15.744572 | orchestrator | Sunday 15 February 2026 04:37:11 +0000 (0:00:00.786) 0:00:35.113 *******
2026-02-15 04:37:15.744583 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:37:15.744594 | orchestrator |
2026-02-15 04:37:15.744606 | orchestrator | TASK [ceilometer : Set ceilometer policy file] *********************************
2026-02-15 04:37:15.744632 | orchestrator | Sunday 15 February 2026 04:37:11 +0000 (0:00:00.148) 0:00:35.261 *******
2026-02-15 04:37:15.744665 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:37:15.744678 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:37:15.744691 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:37:15.744704 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:37:15.744716 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:37:15.744728 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:37:15.744740 | orchestrator |
2026-02-15 04:37:15.744753 | orchestrator | TASK [ceilometer : include_tasks] **********************************************
2026-02-15 04:37:15.744765 | orchestrator | Sunday 15 February 2026 04:37:12 +0000 (0:00:00.607) 0:00:35.868 *******
2026-02-15 04:37:15.744779 | orchestrator | included: /ansible/roles/ceilometer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 04:37:15.744793 | orchestrator |
2026-02-15 04:37:15.744805 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over extra CA certificates] *****
2026-02-15 04:37:15.744818 | orchestrator | Sunday 15 February 2026 04:37:13 +0000 (0:00:01.269) 0:00:37.138 *******
2026-02-15 04:37:15.744832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:15.744853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:16.253427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:16.253533 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:16.253547 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:16.253572 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:16.253579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:16.253584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:16.253600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:16.253605 | orchestrator |
2026-02-15 04:37:16.253610 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS certificate] ***
2026-02-15 04:37:16.253615 | orchestrator | Sunday 15 February 2026 04:37:15 +0000 (0:00:02.260) 0:00:39.398 *******
2026-02-15 04:37:16.253623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:16.253632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:16.253636 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:37:16.253641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:16.253645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:16.253649 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:37:16.253653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:16.253662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:18.060020 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:37:18.060144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:18.060194 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:37:18.060214 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:18.060232 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:37:18.060248 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:18.060290 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:37:18.060321 | orchestrator |
2026-02-15 04:37:18.060334 | orchestrator | TASK [service-cert-copy : ceilometer | Copying over backend internal TLS key] ***
2026-02-15 04:37:18.060345 | orchestrator | Sunday 15 February 2026 04:37:16 +0000 (0:00:00.860) 0:00:40.258 *******
2026-02-15 04:37:18.060356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:18.060368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:18.060399 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:18.060426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:18.060438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:18.060448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})
2026-02-15 04:37:18.060458 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:37:18.060469 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:37:18.060479 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:37:18.060489 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:18.060499 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:37:18.060511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:18.060522 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:37:18.060545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})
2026-02-15 04:37:25.488564 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:37:25.488678 | orchestrator |
2026-02-15 04:37:25.488695 | orchestrator | TASK [ceilometer : Copying over config.json files for services] ****************
2026-02-15 04:37:25.488725 | orchestrator | Sunday 15 February 2026 04:37:18 +0000 (0:00:01.449) 0:00:41.707 *******
2026-02-15 04:37:25.488740 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:25.488756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:25.488767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})
2026-02-15 04:37:25.488779 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared',
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:25.488793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:25.488849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:25.488958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:37:25.488973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:37:25.488985 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:37:25.488996 | orchestrator | 2026-02-15 04:37:25.489008 | orchestrator | TASK [ceilometer : Copying over ceilometer.conf] ******************************* 2026-02-15 04:37:25.489019 | orchestrator | Sunday 15 February 2026 04:37:20 +0000 (0:00:02.528) 0:00:44.235 
******* 2026-02-15 04:37:25.489031 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:25.489043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:25.489070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:34.812624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:34.812744 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:34.812763 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:34.812776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:37:34.812813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:37:34.812827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:37:34.812839 | orchestrator | 2026-02-15 04:37:34.812853 | orchestrator | TASK [ceilometer : Check custom event_definitions.yaml exists] ***************** 2026-02-15 04:37:34.812916 | orchestrator | Sunday 15 February 2026 04:37:25 +0000 (0:00:04.907) 0:00:49.143 ******* 2026-02-15 04:37:34.812945 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 04:37:34.812965 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-15 04:37:34.812977 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-15 04:37:34.812988 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-15 04:37:34.812999 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-15 04:37:34.813010 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-02-15 04:37:34.813021 | orchestrator | 2026-02-15 04:37:34.813032 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml] ************************ 2026-02-15 04:37:34.813044 | orchestrator | Sunday 15 February 2026 04:37:26 +0000 (0:00:01.502) 0:00:50.646 ******* 2026-02-15 04:37:34.813054 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:37:34.813065 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:37:34.813076 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:37:34.813087 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:37:34.813098 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:37:34.813109 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:37:34.813120 | orchestrator | 2026-02-15 04:37:34.813131 | orchestrator | TASK [ceilometer : Copying over event_definitions.yaml for notification service] *** 2026-02-15 
04:37:34.813145 | orchestrator | Sunday 15 February 2026 04:37:27 +0000 (0:00:00.585) 0:00:51.231 ******* 2026-02-15 04:37:34.813157 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:37:34.813170 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:37:34.813182 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:37:34.813195 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:37:34.813206 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:37:34.813217 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:37:34.813228 | orchestrator | 2026-02-15 04:37:34.813239 | orchestrator | TASK [ceilometer : Copying over event_pipeline.yaml] *************************** 2026-02-15 04:37:34.813250 | orchestrator | Sunday 15 February 2026 04:37:29 +0000 (0:00:01.621) 0:00:52.853 ******* 2026-02-15 04:37:34.813261 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:37:34.813272 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:37:34.813283 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:37:34.813294 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:37:34.813304 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:37:34.813315 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:37:34.813326 | orchestrator | 2026-02-15 04:37:34.813337 | orchestrator | TASK [ceilometer : Check custom pipeline.yaml exists] ************************** 2026-02-15 04:37:34.813348 | orchestrator | Sunday 15 February 2026 04:37:30 +0000 (0:00:01.492) 0:00:54.345 ******* 2026-02-15 04:37:34.813369 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 04:37:34.813380 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-02-15 04:37:34.813391 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-02-15 04:37:34.813401 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-02-15 04:37:34.813412 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-02-15 04:37:34.813423 | orchestrator | ok: [testbed-node-5 -> localhost] 
2026-02-15 04:37:34.813434 | orchestrator | 2026-02-15 04:37:34.813445 | orchestrator | TASK [ceilometer : Copying over custom pipeline.yaml file] ********************* 2026-02-15 04:37:34.813456 | orchestrator | Sunday 15 February 2026 04:37:32 +0000 (0:00:01.585) 0:00:55.931 ******* 2026-02-15 04:37:34.813468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:34.813480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:34.813492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:34.813516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:35.629340 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:35.629458 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 
'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:37:35.629475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:37:35.629487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:37:35.629497 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:37:35.629509 | orchestrator | 2026-02-15 04:37:35.629521 | orchestrator | TASK [ceilometer : Copying over pipeline.yaml file] **************************** 2026-02-15 04:37:35.629545 | orchestrator | Sunday 15 February 2026 04:37:34 +0000 (0:00:02.529) 0:00:58.460 ******* 2026-02-15 04:37:35.629569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:35.629596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 04:37:35.629615 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:37:35.629627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:35.629637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 04:37:35.629648 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:37:35.629658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:35.629669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 04:37:35.629679 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:37:35.629695 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:35.629706 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:37:35.629722 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:39.061799 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:37:39.061965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:39.062084 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:37:39.062103 | orchestrator | 2026-02-15 04:37:39.062119 | orchestrator | TASK [ceilometer : Copying VMware vCenter CA file] ***************************** 2026-02-15 04:37:39.062135 | orchestrator | Sunday 15 February 2026 04:37:35 +0000 (0:00:00.825) 0:00:59.286 ******* 2026-02-15 04:37:39.062149 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:37:39.062163 | orchestrator | skipping: 
[testbed-node-1] 2026-02-15 04:37:39.062177 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:37:39.062191 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:37:39.062204 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:37:39.062220 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:37:39.062234 | orchestrator | 2026-02-15 04:37:39.062248 | orchestrator | TASK [ceilometer : Copying over existing policy file] ************************** 2026-02-15 04:37:39.062263 | orchestrator | Sunday 15 February 2026 04:37:36 +0000 (0:00:00.772) 0:01:00.058 ******* 2026-02-15 04:37:39.062278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:39.062296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 04:37:39.062311 | orchestrator | skipping: [testbed-node-0] 2026-02-15 
04:37:39.062346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:39.062391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 04:37:39.062406 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:37:39.062445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}})  2026-02-15 04:37:39.062464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 04:37:39.062478 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:37:39.062493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:39.062508 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:37:39.062523 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:39.062559 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:37:39.062583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}})  2026-02-15 04:37:39.062598 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:37:39.062613 | orchestrator | 2026-02-15 04:37:39.062628 | orchestrator | TASK [ceilometer : Check ceilometer containers] ******************************** 2026-02-15 04:37:39.062642 | orchestrator | Sunday 15 February 2026 04:37:37 +0000 (0:00:00.871) 0:01:00.930 ******* 2026-02-15 04:37:39.062670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': 
['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:13.577104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:13.577212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-notification', 'value': {'container_name': 'ceilometer_notification', 'group': 'ceilometer-notification', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-notification/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-agent-notification 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:13.577226 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 
'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:13.577252 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:13.577282 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ceilometer-compute', 'value': {'container_name': 'ceilometer_compute', 'group': 'ceilometer-compute', 'enabled': True, 'privileged': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', 'nova_libvirt:/var/lib/libvirt', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ceilometer-polling 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:13.577294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:38:13.577319 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:38:13.577329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceilometer-central', 'value': {'container_name': 'ceilometer_central', 'group': 'ceilometer-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130', 'volumes': ['/etc/kolla/ceilometer-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'ceilometer:/var/lib/ceilometer/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}}) 2026-02-15 04:38:13.577339 | orchestrator | 
2026-02-15 04:38:13.577350 | orchestrator | TASK [ceilometer : include_tasks] ********************************************** 2026-02-15 04:38:13.577361 | orchestrator | Sunday 15 February 2026 04:37:39 +0000 (0:00:01.783) 0:01:02.713 ******* 2026-02-15 04:38:13.577370 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:38:13.577380 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:38:13.577388 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:38:13.577397 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:38:13.577406 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:38:13.577414 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:38:13.577423 | orchestrator | 2026-02-15 04:38:13.577432 | orchestrator | TASK [ceilometer : Running Ceilometer bootstrap container] ********************* 2026-02-15 04:38:13.577441 | orchestrator | Sunday 15 February 2026 04:37:39 +0000 (0:00:00.639) 0:01:03.352 ******* 2026-02-15 04:38:13.577457 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:38:13.577466 | orchestrator | 2026-02-15 04:38:13.577475 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-15 04:38:13.577484 | orchestrator | Sunday 15 February 2026 04:37:44 +0000 (0:00:04.994) 0:01:08.347 ******* 2026-02-15 04:38:13.577493 | orchestrator | 2026-02-15 04:38:13.577502 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-15 04:38:13.577511 | orchestrator | Sunday 15 February 2026 04:37:44 +0000 (0:00:00.075) 0:01:08.423 ******* 2026-02-15 04:38:13.577520 | orchestrator | 2026-02-15 04:38:13.577535 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-15 04:38:13.577550 | orchestrator | Sunday 15 February 2026 04:37:44 +0000 (0:00:00.074) 0:01:08.497 ******* 2026-02-15 04:38:13.577564 | orchestrator | 2026-02-15 04:38:13.577578 | orchestrator | TASK [ceilometer : Flush 
handlers] ********************************************* 2026-02-15 04:38:13.577592 | orchestrator | Sunday 15 February 2026 04:37:45 +0000 (0:00:00.242) 0:01:08.740 ******* 2026-02-15 04:38:13.577607 | orchestrator | 2026-02-15 04:38:13.577620 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-15 04:38:13.577629 | orchestrator | Sunday 15 February 2026 04:37:45 +0000 (0:00:00.072) 0:01:08.812 ******* 2026-02-15 04:38:13.577637 | orchestrator | 2026-02-15 04:38:13.577646 | orchestrator | TASK [ceilometer : Flush handlers] ********************************************* 2026-02-15 04:38:13.577662 | orchestrator | Sunday 15 February 2026 04:37:45 +0000 (0:00:00.070) 0:01:08.882 ******* 2026-02-15 04:38:13.577673 | orchestrator | 2026-02-15 04:38:13.577683 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-notification container] ******* 2026-02-15 04:38:13.577693 | orchestrator | Sunday 15 February 2026 04:37:45 +0000 (0:00:00.074) 0:01:08.957 ******* 2026-02-15 04:38:13.577703 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:38:13.577712 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:38:13.577723 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:38:13.577732 | orchestrator | 2026-02-15 04:38:13.577742 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-central container] ************ 2026-02-15 04:38:13.577752 | orchestrator | Sunday 15 February 2026 04:37:52 +0000 (0:00:07.579) 0:01:16.536 ******* 2026-02-15 04:38:13.577762 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:38:13.577772 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:38:13.577782 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:38:13.577792 | orchestrator | 2026-02-15 04:38:13.577802 | orchestrator | RUNNING HANDLER [ceilometer : Restart ceilometer-compute container] ************ 2026-02-15 04:38:13.577812 | orchestrator | Sunday 15 February 2026 04:38:02 +0000 
(0:00:09.528) 0:01:26.065 ******* 2026-02-15 04:38:13.577822 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:38:13.577832 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:38:13.577842 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:38:13.577852 | orchestrator | 2026-02-15 04:38:13.577906 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:38:13.577918 | orchestrator | testbed-node-0 : ok=29  changed=13  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-02-15 04:38:13.577930 | orchestrator | testbed-node-1 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-15 04:38:13.577948 | orchestrator | testbed-node-2 : ok=23  changed=10  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-15 04:38:14.007768 | orchestrator | testbed-node-3 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-15 04:38:14.007947 | orchestrator | testbed-node-4 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-15 04:38:14.007965 | orchestrator | testbed-node-5 : ok=20  changed=7  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2026-02-15 04:38:14.008004 | orchestrator | 2026-02-15 04:38:14.008017 | orchestrator | 2026-02-15 04:38:14.008032 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:38:14.008053 | orchestrator | Sunday 15 February 2026 04:38:13 +0000 (0:00:11.157) 0:01:37.222 ******* 2026-02-15 04:38:14.008071 | orchestrator | =============================================================================== 2026-02-15 04:38:14.008089 | orchestrator | ceilometer : Restart ceilometer-compute container ---------------------- 11.16s 2026-02-15 04:38:14.008107 | orchestrator | ceilometer : Restart ceilometer-central container ----------------------- 9.53s 2026-02-15 04:38:14.008126 | orchestrator | ceilometer : Restart 
ceilometer-notification container ------------------ 7.58s 2026-02-15 04:38:14.008146 | orchestrator | ceilometer : Running Ceilometer bootstrap container --------------------- 4.99s 2026-02-15 04:38:14.008164 | orchestrator | ceilometer : Copying over ceilometer.conf ------------------------------- 4.91s 2026-02-15 04:38:14.008183 | orchestrator | service-ks-register : ceilometer | Granting user roles ------------------ 4.10s 2026-02-15 04:38:14.008195 | orchestrator | service-ks-register : ceilometer | Creating users ----------------------- 3.80s 2026-02-15 04:38:14.008206 | orchestrator | service-ks-register : ceilometer | Creating projects -------------------- 3.54s 2026-02-15 04:38:14.008217 | orchestrator | service-ks-register : ceilometer | Creating roles ----------------------- 3.32s 2026-02-15 04:38:14.008227 | orchestrator | ceilometer : Copying over custom pipeline.yaml file --------------------- 2.53s 2026-02-15 04:38:14.008243 | orchestrator | ceilometer : Copying over config.json files for services ---------------- 2.53s 2026-02-15 04:38:14.008265 | orchestrator | service-cert-copy : ceilometer | Copying over extra CA certificates ----- 2.26s 2026-02-15 04:38:14.008293 | orchestrator | ceilometer : Check ceilometer containers -------------------------------- 1.78s 2026-02-15 04:38:14.008311 | orchestrator | ceilometer : Copying over event_definitions.yaml for notification service --- 1.62s 2026-02-15 04:38:14.008331 | orchestrator | ceilometer : Check custom pipeline.yaml exists -------------------------- 1.59s 2026-02-15 04:38:14.008349 | orchestrator | ceilometer : Check if the folder for custom meter definitions exist ----- 1.58s 2026-02-15 04:38:14.008366 | orchestrator | ceilometer : Check custom event_definitions.yaml exists ----------------- 1.50s 2026-02-15 04:38:14.008381 | orchestrator | ceilometer : Copying over event_pipeline.yaml --------------------------- 1.49s 2026-02-15 04:38:14.008397 | orchestrator | service-cert-copy : ceilometer | 
Copying over backend internal TLS key --- 1.45s 2026-02-15 04:38:14.008415 | orchestrator | ceilometer : Ensuring config directories exist -------------------------- 1.43s 2026-02-15 04:38:16.410349 | orchestrator | 2026-02-15 04:38:16 | INFO  | Task d200391b-f3f4-4573-9ef7-69b16cd9c8aa (aodh) was prepared for execution. 2026-02-15 04:38:16.410450 | orchestrator | 2026-02-15 04:38:16 | INFO  | It takes a moment until task d200391b-f3f4-4573-9ef7-69b16cd9c8aa (aodh) has been started and output is visible here. 2026-02-15 04:38:49.523357 | orchestrator | 2026-02-15 04:38:49.523474 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:38:49.523493 | orchestrator | 2026-02-15 04:38:49.523506 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:38:49.523521 | orchestrator | Sunday 15 February 2026 04:38:20 +0000 (0:00:00.294) 0:00:00.294 ******* 2026-02-15 04:38:49.523534 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:38:49.523548 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:38:49.523562 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:38:49.523574 | orchestrator | 2026-02-15 04:38:49.523588 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:38:49.523601 | orchestrator | Sunday 15 February 2026 04:38:20 +0000 (0:00:00.335) 0:00:00.630 ******* 2026-02-15 04:38:49.523615 | orchestrator | ok: [testbed-node-0] => (item=enable_aodh_True) 2026-02-15 04:38:49.523628 | orchestrator | ok: [testbed-node-1] => (item=enable_aodh_True) 2026-02-15 04:38:49.523665 | orchestrator | ok: [testbed-node-2] => (item=enable_aodh_True) 2026-02-15 04:38:49.523679 | orchestrator | 2026-02-15 04:38:49.523693 | orchestrator | PLAY [Apply role aodh] ********************************************************* 2026-02-15 04:38:49.523707 | orchestrator | 2026-02-15 04:38:49.523721 | orchestrator | TASK [aodh : 
include_tasks] **************************************************** 2026-02-15 04:38:49.523733 | orchestrator | Sunday 15 February 2026 04:38:21 +0000 (0:00:00.456) 0:00:01.087 ******* 2026-02-15 04:38:49.523747 | orchestrator | included: /ansible/roles/aodh/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:38:49.523761 | orchestrator | 2026-02-15 04:38:49.523775 | orchestrator | TASK [service-ks-register : aodh | Creating services] ************************** 2026-02-15 04:38:49.523789 | orchestrator | Sunday 15 February 2026 04:38:21 +0000 (0:00:00.545) 0:00:01.633 ******* 2026-02-15 04:38:49.523803 | orchestrator | changed: [testbed-node-0] => (item=aodh (alarming)) 2026-02-15 04:38:49.523816 | orchestrator | 2026-02-15 04:38:49.523829 | orchestrator | TASK [service-ks-register : aodh | Creating endpoints] ************************* 2026-02-15 04:38:49.523837 | orchestrator | Sunday 15 February 2026 04:38:25 +0000 (0:00:03.622) 0:00:05.255 ******* 2026-02-15 04:38:49.523845 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api-int.testbed.osism.xyz:8042 -> internal) 2026-02-15 04:38:49.523854 | orchestrator | changed: [testbed-node-0] => (item=aodh -> https://api.testbed.osism.xyz:8042 -> public) 2026-02-15 04:38:49.523917 | orchestrator | 2026-02-15 04:38:49.523927 | orchestrator | TASK [service-ks-register : aodh | Creating projects] ************************** 2026-02-15 04:38:49.523936 | orchestrator | Sunday 15 February 2026 04:38:32 +0000 (0:00:06.835) 0:00:12.090 ******* 2026-02-15 04:38:49.523945 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-15 04:38:49.523956 | orchestrator | 2026-02-15 04:38:49.523965 | orchestrator | TASK [service-ks-register : aodh | Creating users] ***************************** 2026-02-15 04:38:49.523974 | orchestrator | Sunday 15 February 2026 04:38:36 +0000 (0:00:03.705) 0:00:15.796 ******* 2026-02-15 04:38:49.523983 | orchestrator | [WARNING]: Module did not set 
no_log for update_password 2026-02-15 04:38:49.523993 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service) 2026-02-15 04:38:49.524002 | orchestrator | 2026-02-15 04:38:49.524011 | orchestrator | TASK [service-ks-register : aodh | Creating roles] ***************************** 2026-02-15 04:38:49.524020 | orchestrator | Sunday 15 February 2026 04:38:40 +0000 (0:00:04.009) 0:00:19.806 ******* 2026-02-15 04:38:49.524029 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-15 04:38:49.524038 | orchestrator | 2026-02-15 04:38:49.524047 | orchestrator | TASK [service-ks-register : aodh | Granting user roles] ************************ 2026-02-15 04:38:49.524057 | orchestrator | Sunday 15 February 2026 04:38:43 +0000 (0:00:03.416) 0:00:23.222 ******* 2026-02-15 04:38:49.524066 | orchestrator | changed: [testbed-node-0] => (item=aodh -> service -> admin) 2026-02-15 04:38:49.524074 | orchestrator | 2026-02-15 04:38:49.524083 | orchestrator | TASK [aodh : Ensuring config directories exist] ******************************** 2026-02-15 04:38:49.524093 | orchestrator | Sunday 15 February 2026 04:38:47 +0000 (0:00:03.895) 0:00:27.118 ******* 2026-02-15 04:38:49.524105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:38:49.524143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:38:49.524165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:38:49.524175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': 
{'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:38:49.524187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:38:49.524196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:38:49.524205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:49.524230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:50.817518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:50.817643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:50.817660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:50.817673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:50.817686 | orchestrator | 2026-02-15 04:38:50.817699 | orchestrator | TASK [aodh : Check if policies shall be overwritten] *************************** 2026-02-15 04:38:50.817712 | orchestrator | Sunday 15 February 2026 04:38:49 +0000 (0:00:02.072) 0:00:29.191 ******* 2026-02-15 04:38:50.817723 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:38:50.817735 | orchestrator | 2026-02-15 
04:38:50.817747 | orchestrator | TASK [aodh : Set aodh policy file] ********************************************* 2026-02-15 04:38:50.817758 | orchestrator | Sunday 15 February 2026 04:38:49 +0000 (0:00:00.140) 0:00:29.331 ******* 2026-02-15 04:38:50.817768 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:38:50.817779 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:38:50.817790 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:38:50.817801 | orchestrator | 2026-02-15 04:38:50.817812 | orchestrator | TASK [aodh : Copying over existing policy file] ******************************** 2026-02-15 04:38:50.817850 | orchestrator | Sunday 15 February 2026 04:38:50 +0000 (0:00:00.484) 0:00:29.816 ******* 2026-02-15 04:38:50.817959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 04:38:50.818016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 04:38:50.818091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:38:50.818105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 04:38:50.818117 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:38:50.818131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 04:38:50.818144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 04:38:50.818204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:38:50.818234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 04:38:55.837631 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:38:55.837752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 04:38:55.837774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-15 04:38:55.837792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:38:55.837812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 04:38:55.837922 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:38:55.837947 | orchestrator | 2026-02-15 04:38:55.837967 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-15 04:38:55.837989 | orchestrator | Sunday 15 February 2026 04:38:50 +0000 (0:00:00.676) 0:00:30.492 ******* 2026-02-15 04:38:55.838008 | orchestrator | included: /ansible/roles/aodh/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:38:55.838083 | orchestrator | 2026-02-15 04:38:55.838195 | orchestrator | TASK [service-cert-copy : aodh | Copying over extra CA certificates] *********** 2026-02-15 04:38:55.838224 | orchestrator | Sunday 
15 February 2026 04:38:51 +0000 (0:00:00.728) 0:00:31.220 ******* 2026-02-15 04:38:55.838263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:38:55.838315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:38:55.838339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 
'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:38:55.838359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:38:55.838394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}}) 2026-02-15 04:38:55.838411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:38:55.838437 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:55.838471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:56.466928 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:56.467030 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:56.467046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:56.467083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:38:56.467096 | orchestrator | 2026-02-15 04:38:56.467109 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS certificate] *** 2026-02-15 04:38:56.467122 | orchestrator | Sunday 15 February 2026 04:38:55 +0000 (0:00:04.291) 0:00:35.511 ******* 2026-02-15 04:38:56.467150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 04:38:56.467194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 04:38:56.467224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:38:56.467237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 04:38:56.467257 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:38:56.467269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 04:38:56.467281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 04:38:56.467293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:38:56.467311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 04:38:56.467322 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:38:56.467342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 04:38:57.474144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 
3306'], 'timeout': '30'}}})  2026-02-15 04:38:57.474249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:38:57.474260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 04:38:57.474269 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:38:57.474278 | orchestrator | 2026-02-15 04:38:57.474287 | orchestrator | TASK [service-cert-copy : aodh | Copying over backend internal TLS key] ******** 2026-02-15 04:38:57.474296 | orchestrator | Sunday 15 February 2026 04:38:56 +0000 (0:00:00.632) 0:00:36.144 ******* 2026-02-15 04:38:57.474315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 04:38:57.474325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 04:38:57.474333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:38:57.474359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 04:38:57.474368 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:38:57.474376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 04:38:57.474383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 04:38:57.474391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:38:57.474404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 04:38:57.474411 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:38:57.474424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': 
{'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-02-15 04:39:01.699934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-02-15 04:39:01.700060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-02-15 04:39:01.700079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-02-15 04:39:01.700092 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:39:01.700107 | orchestrator | 2026-02-15 04:39:01.700119 | orchestrator | TASK [aodh : Copying over config.json files for services] ********************** 2026-02-15 04:39:01.700133 | orchestrator | Sunday 15 February 2026 04:38:57 +0000 (0:00:01.001) 0:00:37.145 ******* 2026-02-15 04:39:01.700162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:39:01.700177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:39:01.700237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:39:01.700251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:39:01.700263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 
'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:39:01.700275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:39:01.700292 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:01.700304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:01.700324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:01.700344 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:10.112241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:10.112356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:10.112373 | orchestrator | 2026-02-15 04:39:10.112389 | orchestrator | TASK [aodh : Copying over aodh.conf] ******************************************* 2026-02-15 04:39:10.112401 | orchestrator | Sunday 15 February 2026 04:39:01 +0000 (0:00:04.226) 0:00:41.372 ******* 2026-02-15 04:39:10.112413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:39:10.112443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:39:10.112476 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:39:10.112506 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:39:10.112519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:39:10.112531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:39:10.112542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 
'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:10.112560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:10.112580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:10.112591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:10.112611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:15.308355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:15.308468 | orchestrator | 2026-02-15 04:39:15.308486 | orchestrator | TASK [aodh : Copying over wsgi-aodh files for services] ************************ 2026-02-15 04:39:15.308498 | orchestrator | Sunday 15 February 2026 04:39:10 +0000 (0:00:08.410) 0:00:49.783 ******* 2026-02-15 04:39:15.308507 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:39:15.308519 | orchestrator | 
changed: [testbed-node-1] 2026-02-15 04:39:15.308531 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:39:15.308542 | orchestrator | 2026-02-15 04:39:15.308553 | orchestrator | TASK [aodh : Check aodh containers] ******************************************** 2026-02-15 04:39:15.308566 | orchestrator | Sunday 15 February 2026 04:39:11 +0000 (0:00:01.840) 0:00:51.623 ******* 2026-02-15 04:39:15.308596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:39:15.308631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:39:15.308645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-02-15 04:39:15.308676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:39:15.308689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 
'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:39:15.308699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}}) 2026-02-15 04:39:15.308716 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:15.308736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:15.308749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:15.308761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:39:15.308780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:40:00.254416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}}) 2026-02-15 04:40:00.254567 | orchestrator | 2026-02-15 04:40:00.254587 | orchestrator | TASK [aodh : include_tasks] **************************************************** 2026-02-15 04:40:00.254602 | orchestrator | Sunday 15 February 2026 04:39:15 +0000 (0:00:03.361) 0:00:54.985 ******* 2026-02-15 04:40:00.254614 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:40:00.254626 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:40:00.254638 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:40:00.254673 | orchestrator | 2026-02-15 04:40:00.254686 | orchestrator | TASK [aodh : Creating aodh database] ******************************************* 2026-02-15 04:40:00.254697 | orchestrator | Sunday 15 February 2026 04:39:15 +0000 (0:00:00.321) 0:00:55.306 ******* 2026-02-15 04:40:00.254708 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:40:00.254719 | orchestrator | 2026-02-15 04:40:00.254730 | orchestrator | TASK [aodh : Creating aodh database user and setting permissions] ************** 2026-02-15 04:40:00.254741 | orchestrator | Sunday 15 February 2026 04:39:17 +0000 (0:00:02.301) 0:00:57.608 ******* 2026-02-15 04:40:00.254753 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:40:00.254764 | orchestrator | 2026-02-15 
04:40:00.254775 | orchestrator | TASK [aodh : Running aodh bootstrap container] ********************************* 2026-02-15 04:40:00.254786 | orchestrator | Sunday 15 February 2026 04:39:20 +0000 (0:00:02.390) 0:00:59.998 ******* 2026-02-15 04:40:00.254798 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:40:00.254808 | orchestrator | 2026-02-15 04:40:00.254819 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-15 04:40:00.254846 | orchestrator | Sunday 15 February 2026 04:39:33 +0000 (0:00:13.407) 0:01:13.405 ******* 2026-02-15 04:40:00.254858 | orchestrator | 2026-02-15 04:40:00.254899 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-15 04:40:00.254910 | orchestrator | Sunday 15 February 2026 04:39:33 +0000 (0:00:00.072) 0:01:13.478 ******* 2026-02-15 04:40:00.254921 | orchestrator | 2026-02-15 04:40:00.254932 | orchestrator | TASK [aodh : Flush handlers] *************************************************** 2026-02-15 04:40:00.254946 | orchestrator | Sunday 15 February 2026 04:39:33 +0000 (0:00:00.074) 0:01:13.552 ******* 2026-02-15 04:40:00.254959 | orchestrator | 2026-02-15 04:40:00.254972 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-api container] **************************** 2026-02-15 04:40:00.254992 | orchestrator | Sunday 15 February 2026 04:39:34 +0000 (0:00:00.263) 0:01:13.816 ******* 2026-02-15 04:40:00.255009 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:40:00.255028 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:40:00.255051 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:40:00.255075 | orchestrator | 2026-02-15 04:40:00.255093 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-evaluator container] ********************** 2026-02-15 04:40:00.255112 | orchestrator | Sunday 15 February 2026 04:39:39 +0000 (0:00:05.600) 0:01:19.416 ******* 2026-02-15 04:40:00.255131 | orchestrator | changed: 
[testbed-node-0] 2026-02-15 04:40:00.255149 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:40:00.255169 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:40:00.255189 | orchestrator | 2026-02-15 04:40:00.255208 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-listener container] *********************** 2026-02-15 04:40:00.255226 | orchestrator | Sunday 15 February 2026 04:39:44 +0000 (0:00:04.850) 0:01:24.267 ******* 2026-02-15 04:40:00.255241 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:40:00.255255 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:40:00.255267 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:40:00.255280 | orchestrator | 2026-02-15 04:40:00.255294 | orchestrator | RUNNING HANDLER [aodh : Restart aodh-notifier container] *********************** 2026-02-15 04:40:00.255305 | orchestrator | Sunday 15 February 2026 04:39:49 +0000 (0:00:05.033) 0:01:29.301 ******* 2026-02-15 04:40:00.255316 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:40:00.255327 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:40:00.255338 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:40:00.255349 | orchestrator | 2026-02-15 04:40:00.255360 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:40:00.255372 | orchestrator | testbed-node-0 : ok=23  changed=17  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 04:40:00.255385 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-15 04:40:00.255396 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-15 04:40:00.255419 | orchestrator | 2026-02-15 04:40:00.255430 | orchestrator | 2026-02-15 04:40:00.255441 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:40:00.255452 | orchestrator | Sunday 15 February 
2026 04:39:59 +0000 (0:00:10.264) 0:01:39.565 ******* 2026-02-15 04:40:00.255464 | orchestrator | =============================================================================== 2026-02-15 04:40:00.255475 | orchestrator | aodh : Running aodh bootstrap container -------------------------------- 13.41s 2026-02-15 04:40:00.255486 | orchestrator | aodh : Restart aodh-notifier container --------------------------------- 10.26s 2026-02-15 04:40:00.255515 | orchestrator | aodh : Copying over aodh.conf ------------------------------------------- 8.41s 2026-02-15 04:40:00.255527 | orchestrator | service-ks-register : aodh | Creating endpoints ------------------------- 6.84s 2026-02-15 04:40:00.255538 | orchestrator | aodh : Restart aodh-api container --------------------------------------- 5.60s 2026-02-15 04:40:00.255549 | orchestrator | aodh : Restart aodh-listener container ---------------------------------- 5.03s 2026-02-15 04:40:00.255560 | orchestrator | aodh : Restart aodh-evaluator container --------------------------------- 4.85s 2026-02-15 04:40:00.255571 | orchestrator | service-cert-copy : aodh | Copying over extra CA certificates ----------- 4.29s 2026-02-15 04:40:00.255582 | orchestrator | aodh : Copying over config.json files for services ---------------------- 4.23s 2026-02-15 04:40:00.255593 | orchestrator | service-ks-register : aodh | Creating users ----------------------------- 4.01s 2026-02-15 04:40:00.255604 | orchestrator | service-ks-register : aodh | Granting user roles ------------------------ 3.90s 2026-02-15 04:40:00.255615 | orchestrator | service-ks-register : aodh | Creating projects -------------------------- 3.71s 2026-02-15 04:40:00.255626 | orchestrator | service-ks-register : aodh | Creating services -------------------------- 3.62s 2026-02-15 04:40:00.255637 | orchestrator | service-ks-register : aodh | Creating roles ----------------------------- 3.42s 2026-02-15 04:40:00.255648 | orchestrator | aodh : Check aodh containers 
-------------------------------------------- 3.36s 2026-02-15 04:40:00.255659 | orchestrator | aodh : Creating aodh database user and setting permissions -------------- 2.39s 2026-02-15 04:40:00.255670 | orchestrator | aodh : Creating aodh database ------------------------------------------- 2.30s 2026-02-15 04:40:00.255681 | orchestrator | aodh : Ensuring config directories exist -------------------------------- 2.07s 2026-02-15 04:40:00.255692 | orchestrator | aodh : Copying over wsgi-aodh files for services ------------------------ 1.84s 2026-02-15 04:40:00.255703 | orchestrator | service-cert-copy : aodh | Copying over backend internal TLS key -------- 1.00s 2026-02-15 04:40:02.601601 | orchestrator | 2026-02-15 04:40:02 | INFO  | Task ef3751b5-aa6c-4c1e-8a1d-632e60952085 (kolla-ceph-rgw) was prepared for execution. 2026-02-15 04:40:02.601757 | orchestrator | 2026-02-15 04:40:02 | INFO  | It takes a moment until task ef3751b5-aa6c-4c1e-8a1d-632e60952085 (kolla-ceph-rgw) has been started and output is visible here. 
2026-02-15 04:40:38.510128 | orchestrator | 2026-02-15 04:40:38.510249 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:40:38.510265 | orchestrator | 2026-02-15 04:40:38.510278 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:40:38.510289 | orchestrator | Sunday 15 February 2026 04:40:07 +0000 (0:00:00.291) 0:00:00.291 ******* 2026-02-15 04:40:38.510301 | orchestrator | ok: [testbed-manager] 2026-02-15 04:40:38.510313 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:40:38.510324 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:40:38.510335 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:40:38.510346 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:40:38.510357 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:40:38.510368 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:40:38.510378 | orchestrator | 2026-02-15 04:40:38.510390 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:40:38.510401 | orchestrator | Sunday 15 February 2026 04:40:07 +0000 (0:00:00.851) 0:00:01.143 ******* 2026-02-15 04:40:38.510438 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-02-15 04:40:38.510450 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-02-15 04:40:38.510461 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-02-15 04:40:38.510472 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-02-15 04:40:38.510483 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-02-15 04:40:38.510494 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-02-15 04:40:38.510504 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-02-15 04:40:38.510515 | orchestrator | 2026-02-15 04:40:38.510526 | orchestrator | PLAY [Apply role ceph-rgw] 
***************************************************** 2026-02-15 04:40:38.510537 | orchestrator | 2026-02-15 04:40:38.510548 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-02-15 04:40:38.510559 | orchestrator | Sunday 15 February 2026 04:40:08 +0000 (0:00:00.737) 0:00:01.881 ******* 2026-02-15 04:40:38.510572 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 04:40:38.510584 | orchestrator | 2026-02-15 04:40:38.510595 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-02-15 04:40:38.510609 | orchestrator | Sunday 15 February 2026 04:40:10 +0000 (0:00:01.663) 0:00:03.544 ******* 2026-02-15 04:40:38.510622 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-02-15 04:40:38.510634 | orchestrator | 2026-02-15 04:40:38.510646 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-02-15 04:40:38.510659 | orchestrator | Sunday 15 February 2026 04:40:13 +0000 (0:00:03.637) 0:00:07.181 ******* 2026-02-15 04:40:38.510672 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-02-15 04:40:38.510686 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-02-15 04:40:38.510699 | orchestrator | 2026-02-15 04:40:38.510711 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-02-15 04:40:38.510725 | orchestrator | Sunday 15 February 2026 04:40:20 +0000 (0:00:06.245) 0:00:13.426 ******* 2026-02-15 04:40:38.510738 | orchestrator | ok: [testbed-manager] => (item=service) 2026-02-15 04:40:38.510750 | orchestrator | 2026-02-15 04:40:38.510762 
| orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-02-15 04:40:38.510774 | orchestrator | Sunday 15 February 2026 04:40:23 +0000 (0:00:03.168) 0:00:16.595 ******* 2026-02-15 04:40:38.510787 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-15 04:40:38.510799 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-02-15 04:40:38.510812 | orchestrator | 2026-02-15 04:40:38.510824 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2026-02-15 04:40:38.510836 | orchestrator | Sunday 15 February 2026 04:40:27 +0000 (0:00:03.801) 0:00:20.397 ******* 2026-02-15 04:40:38.510849 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-02-15 04:40:38.510861 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-02-15 04:40:38.510908 | orchestrator | 2026-02-15 04:40:38.510927 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-02-15 04:40:38.510948 | orchestrator | Sunday 15 February 2026 04:40:33 +0000 (0:00:06.153) 0:00:26.551 ******* 2026-02-15 04:40:38.510968 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-02-15 04:40:38.510987 | orchestrator | 2026-02-15 04:40:38.511001 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:40:38.511012 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:40:38.511033 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:40:38.511044 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:40:38.511055 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:40:38.511066 | 
orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:40:38.511112 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:40:38.511125 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:40:38.511136 | orchestrator | 2026-02-15 04:40:38.511147 | orchestrator | 2026-02-15 04:40:38.511158 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:40:38.511169 | orchestrator | Sunday 15 February 2026 04:40:38 +0000 (0:00:04.749) 0:00:31.301 ******* 2026-02-15 04:40:38.511180 | orchestrator | =============================================================================== 2026-02-15 04:40:38.511191 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.25s 2026-02-15 04:40:38.511202 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.15s 2026-02-15 04:40:38.511213 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.75s 2026-02-15 04:40:38.511224 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.80s 2026-02-15 04:40:38.511234 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.64s 2026-02-15 04:40:38.511245 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.17s 2026-02-15 04:40:38.511256 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.66s 2026-02-15 04:40:38.511267 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.85s 2026-02-15 04:40:38.511278 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.74s 2026-02-15 04:40:40.910146 | orchestrator | 2026-02-15 04:40:40 | 
INFO  | Task 80f8e7a4-8d86-4598-9c35-753c5d729ba9 (gnocchi) was prepared for execution. 2026-02-15 04:40:40.910263 | orchestrator | 2026-02-15 04:40:40 | INFO  | It takes a moment until task 80f8e7a4-8d86-4598-9c35-753c5d729ba9 (gnocchi) has been started and output is visible here. 2026-02-15 04:40:46.301057 | orchestrator | 2026-02-15 04:40:46.301169 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:40:46.301186 | orchestrator | 2026-02-15 04:40:46.301198 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:40:46.301210 | orchestrator | Sunday 15 February 2026 04:40:45 +0000 (0:00:00.277) 0:00:00.277 ******* 2026-02-15 04:40:46.301221 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:40:46.301234 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:40:46.301245 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:40:46.301256 | orchestrator | 2026-02-15 04:40:46.301267 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:40:46.301279 | orchestrator | Sunday 15 February 2026 04:40:45 +0000 (0:00:00.360) 0:00:00.638 ******* 2026-02-15 04:40:46.301291 | orchestrator | ok: [testbed-node-0] => (item=enable_gnocchi_False) 2026-02-15 04:40:46.301302 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_gnocchi_True 2026-02-15 04:40:46.301314 | orchestrator | ok: [testbed-node-1] => (item=enable_gnocchi_False) 2026-02-15 04:40:46.301325 | orchestrator | ok: [testbed-node-2] => (item=enable_gnocchi_False) 2026-02-15 04:40:46.301336 | orchestrator | 2026-02-15 04:40:46.301347 | orchestrator | PLAY [Apply role gnocchi] ****************************************************** 2026-02-15 04:40:46.301383 | orchestrator | skipping: no hosts matched 2026-02-15 04:40:46.301397 | orchestrator | 2026-02-15 04:40:46.301408 | orchestrator | PLAY RECAP 
********************************************************************* 2026-02-15 04:40:46.301419 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:40:46.301431 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:40:46.301442 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:40:46.301453 | orchestrator | 2026-02-15 04:40:46.301464 | orchestrator | 2026-02-15 04:40:46.301475 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:40:46.301486 | orchestrator | Sunday 15 February 2026 04:40:45 +0000 (0:00:00.361) 0:00:00.999 ******* 2026-02-15 04:40:46.301497 | orchestrator | =============================================================================== 2026-02-15 04:40:46.301508 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.36s 2026-02-15 04:40:46.301519 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s 2026-02-15 04:40:48.687068 | orchestrator | 2026-02-15 04:40:48 | INFO  | Task 05a2ddb5-78bd-4a42-a126-b9e8e83993a1 (manila) was prepared for execution. 2026-02-15 04:40:48.687141 | orchestrator | 2026-02-15 04:40:48 | INFO  | It takes a moment until task 05a2ddb5-78bd-4a42-a126-b9e8e83993a1 (manila) has been started and output is visible here. 
2026-02-15 04:41:31.205947 | orchestrator | 2026-02-15 04:41:31.206139 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:41:31.206164 | orchestrator | 2026-02-15 04:41:31.206177 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:41:31.206189 | orchestrator | Sunday 15 February 2026 04:40:52 +0000 (0:00:00.259) 0:00:00.260 ******* 2026-02-15 04:41:31.206200 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:41:31.206213 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:41:31.206224 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:41:31.206235 | orchestrator | 2026-02-15 04:41:31.206246 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:41:31.206274 | orchestrator | Sunday 15 February 2026 04:40:53 +0000 (0:00:00.321) 0:00:00.581 ******* 2026-02-15 04:41:31.206286 | orchestrator | ok: [testbed-node-0] => (item=enable_manila_True) 2026-02-15 04:41:31.206298 | orchestrator | ok: [testbed-node-1] => (item=enable_manila_True) 2026-02-15 04:41:31.206309 | orchestrator | ok: [testbed-node-2] => (item=enable_manila_True) 2026-02-15 04:41:31.206320 | orchestrator | 2026-02-15 04:41:31.206331 | orchestrator | PLAY [Apply role manila] ******************************************************* 2026-02-15 04:41:31.206342 | orchestrator | 2026-02-15 04:41:31.206353 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-15 04:41:31.206364 | orchestrator | Sunday 15 February 2026 04:40:53 +0000 (0:00:00.443) 0:00:01.024 ******* 2026-02-15 04:41:31.206375 | orchestrator | included: /ansible/roles/manila/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:41:31.206387 | orchestrator | 2026-02-15 04:41:31.206398 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-15 
04:41:31.206409 | orchestrator | Sunday 15 February 2026 04:40:54 +0000 (0:00:00.558) 0:00:01.582 ******* 2026-02-15 04:41:31.206420 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:41:31.206433 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:41:31.206446 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:41:31.206460 | orchestrator | 2026-02-15 04:41:31.206472 | orchestrator | TASK [service-ks-register : manila | Creating services] ************************ 2026-02-15 04:41:31.206486 | orchestrator | Sunday 15 February 2026 04:40:54 +0000 (0:00:00.481) 0:00:02.064 ******* 2026-02-15 04:41:31.206498 | orchestrator | changed: [testbed-node-0] => (item=manila (share)) 2026-02-15 04:41:31.206569 | orchestrator | changed: [testbed-node-0] => (item=manilav2 (sharev2)) 2026-02-15 04:41:31.206584 | orchestrator | 2026-02-15 04:41:31.206597 | orchestrator | TASK [service-ks-register : manila | Creating endpoints] *********************** 2026-02-15 04:41:31.206609 | orchestrator | Sunday 15 February 2026 04:41:01 +0000 (0:00:06.623) 0:00:08.688 ******* 2026-02-15 04:41:31.206622 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s -> internal) 2026-02-15 04:41:31.206636 | orchestrator | changed: [testbed-node-0] => (item=manila -> https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s -> public) 2026-02-15 04:41:31.206648 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api-int.testbed.osism.xyz:8786/v2 -> internal) 2026-02-15 04:41:31.206661 | orchestrator | changed: [testbed-node-0] => (item=manilav2 -> https://api.testbed.osism.xyz:8786/v2 -> public) 2026-02-15 04:41:31.206673 | orchestrator | 2026-02-15 04:41:31.206686 | orchestrator | TASK [service-ks-register : manila | Creating projects] ************************ 2026-02-15 04:41:31.206698 | orchestrator | Sunday 15 February 2026 04:41:14 +0000 (0:00:13.064) 0:00:21.753 ******* 2026-02-15 04:41:31.206711 | 
orchestrator | ok: [testbed-node-0] => (item=service) 2026-02-15 04:41:31.206723 | orchestrator | 2026-02-15 04:41:31.206735 | orchestrator | TASK [service-ks-register : manila | Creating users] *************************** 2026-02-15 04:41:31.206747 | orchestrator | Sunday 15 February 2026 04:41:17 +0000 (0:00:03.457) 0:00:25.210 ******* 2026-02-15 04:41:31.206760 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-02-15 04:41:31.206772 | orchestrator | changed: [testbed-node-0] => (item=manila -> service) 2026-02-15 04:41:31.206785 | orchestrator | 2026-02-15 04:41:31.206796 | orchestrator | TASK [service-ks-register : manila | Creating roles] *************************** 2026-02-15 04:41:31.206807 | orchestrator | Sunday 15 February 2026 04:41:21 +0000 (0:00:03.944) 0:00:29.154 ******* 2026-02-15 04:41:31.206818 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-02-15 04:41:31.206830 | orchestrator | 2026-02-15 04:41:31.206841 | orchestrator | TASK [service-ks-register : manila | Granting user roles] ********************** 2026-02-15 04:41:31.206852 | orchestrator | Sunday 15 February 2026 04:41:25 +0000 (0:00:03.210) 0:00:32.365 ******* 2026-02-15 04:41:31.206862 | orchestrator | changed: [testbed-node-0] => (item=manila -> service -> admin) 2026-02-15 04:41:31.206873 | orchestrator | 2026-02-15 04:41:31.206918 | orchestrator | TASK [manila : Ensuring config directories exist] ****************************** 2026-02-15 04:41:31.206930 | orchestrator | Sunday 15 February 2026 04:41:28 +0000 (0:00:03.936) 0:00:36.301 ******* 2026-02-15 04:41:31.206965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-15 04:41:31.206989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-15 04:41:31.207011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 
'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-15 04:41:31.207023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:31.207035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:31.207047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:31.207067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:41.931877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:41.932057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:41.932074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:41.932087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:41.932099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:41.932112 | orchestrator | 2026-02-15 04:41:41.932126 | orchestrator | TASK [manila : include_tasks] ************************************************** 2026-02-15 04:41:41.932139 | orchestrator | Sunday 15 February 2026 04:41:31 +0000 (0:00:02.355) 0:00:38.657 ******* 2026-02-15 04:41:41.932151 | orchestrator | included: /ansible/roles/manila/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:41:41.932163 | orchestrator | 2026-02-15 04:41:41.932174 | orchestrator | TASK [manila : Ensuring manila service ceph config subdir exists] ************** 2026-02-15 04:41:41.932185 | orchestrator | Sunday 15 February 2026 04:41:31 +0000 (0:00:00.594) 0:00:39.252 ******* 2026-02-15 04:41:41.932197 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:41:41.932209 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:41:41.932220 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:41:41.932231 | orchestrator | 2026-02-15 04:41:41.932242 | orchestrator | TASK [manila : Copy over multiple ceph configs for Manila] ********************* 2026-02-15 04:41:41.932253 | orchestrator | Sunday 15 February 2026 04:41:32 +0000 (0:00:00.935) 0:00:40.187 ******* 2026-02-15 04:41:41.932275 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-15 04:41:41.932305 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-15 04:41:41.932324 | 
orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-15 04:41:41.932336 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-15 04:41:41.932348 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-15 04:41:41.932359 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-15 04:41:41.932370 | orchestrator | 2026-02-15 04:41:41.932381 | orchestrator | TASK [manila : Copy over ceph Manila keyrings] ********************************* 2026-02-15 04:41:41.932393 | orchestrator | Sunday 15 February 2026 04:41:34 +0000 (0:00:01.814) 0:00:42.001 ******* 2026-02-15 04:41:41.932407 | orchestrator | changed: [testbed-node-0] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-15 04:41:41.932419 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-15 04:41:41.932432 | orchestrator | changed: [testbed-node-1] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-15 04:41:41.932445 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 
'protocols': ['NFS', 'CIFS']})  2026-02-15 04:41:41.932459 | orchestrator | changed: [testbed-node-2] => (item={'name': 'cephfsnative1', 'share_name': 'CEPHFS1', 'driver': 'cephfsnative', 'cluster': 'ceph', 'enabled': True, 'protocols': ['CEPHFS']}) 2026-02-15 04:41:41.932470 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cephfsnfs1', 'share_name': 'CEPHFSNFS1', 'driver': 'cephfsnfs', 'cluster': 'ceph', 'enabled': False, 'protocols': ['NFS', 'CIFS']})  2026-02-15 04:41:41.932481 | orchestrator | 2026-02-15 04:41:41.932492 | orchestrator | TASK [manila : Ensuring config directory has correct owner and permission] ***** 2026-02-15 04:41:41.932504 | orchestrator | Sunday 15 February 2026 04:41:35 +0000 (0:00:01.244) 0:00:43.246 ******* 2026-02-15 04:41:41.932515 | orchestrator | ok: [testbed-node-0] => (item=manila-share) 2026-02-15 04:41:41.932527 | orchestrator | ok: [testbed-node-1] => (item=manila-share) 2026-02-15 04:41:41.932538 | orchestrator | ok: [testbed-node-2] => (item=manila-share) 2026-02-15 04:41:41.932565 | orchestrator | 2026-02-15 04:41:41.932589 | orchestrator | TASK [manila : Check if policies shall be overwritten] ************************* 2026-02-15 04:41:41.932600 | orchestrator | Sunday 15 February 2026 04:41:36 +0000 (0:00:00.702) 0:00:43.949 ******* 2026-02-15 04:41:41.932611 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:41:41.932622 | orchestrator | 2026-02-15 04:41:41.932634 | orchestrator | TASK [manila : Set manila policy file] ***************************************** 2026-02-15 04:41:41.932645 | orchestrator | Sunday 15 February 2026 04:41:36 +0000 (0:00:00.144) 0:00:44.094 ******* 2026-02-15 04:41:41.932656 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:41:41.932667 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:41:41.932685 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:41:41.932696 | orchestrator | 2026-02-15 04:41:41.932708 | orchestrator | TASK [manila : include_tasks] 
************************************************** 2026-02-15 04:41:41.932719 | orchestrator | Sunday 15 February 2026 04:41:37 +0000 (0:00:00.515) 0:00:44.609 ******* 2026-02-15 04:41:41.932732 | orchestrator | included: /ansible/roles/manila/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:41:41.932750 | orchestrator | 2026-02-15 04:41:41.932767 | orchestrator | TASK [service-cert-copy : manila | Copying over extra CA certificates] ********* 2026-02-15 04:41:41.932785 | orchestrator | Sunday 15 February 2026 04:41:37 +0000 (0:00:00.618) 0:00:45.228 ******* 2026-02-15 04:41:41.932820 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-15 04:41:42.805791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-15 04:41:42.805957 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-02-15 04:41:42.805975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:42.805989 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:42.806082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:42.806144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:42.806169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:42.806188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:42.806207 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:42.806225 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:42.806259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}}) 2026-02-15 04:41:42.806278 | orchestrator | 2026-02-15 04:41:42.806298 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS certificate] *** 2026-02-15 04:41:42.806317 | orchestrator | Sunday 15 February 2026 04:41:42 +0000 (0:00:04.154) 0:00:49.383 ******* 2026-02-15 04:41:42.806359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-15 04:41:43.446572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:41:43.446674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 04:41:43.446692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 
'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 04:41:43.446732 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:41:43.446747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-15 04:41:43.446760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:41:43.446785 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 04:41:43.446814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 04:41:43.446826 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:41:43.446838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-15 04:41:43.446850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:41:43.446870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 04:41:43.446921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 04:41:43.446943 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:41:43.446963 | orchestrator | 2026-02-15 04:41:43.446983 | orchestrator | TASK [service-cert-copy : manila | Copying over backend internal TLS key] ****** 2026-02-15 04:41:43.446998 | orchestrator | Sunday 15 February 2026 04:41:42 +0000 (0:00:00.869) 0:00:50.252 ******* 2026-02-15 04:41:43.447026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-15 04:41:48.151053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:41:48.151193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 04:41:48.151239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 04:41:48.151255 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:41:48.151270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 
'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-15 04:41:48.151284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:41:48.151310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 04:41:48.151342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 04:41:48.151355 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:41:48.151367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-02-15 04:41:48.151388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 04:41:48.151400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 04:41:48.151413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 04:41:48.151424 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:41:48.151436 | orchestrator | 2026-02-15 04:41:48.151449 | orchestrator | TASK [manila : Copying over config.json files for services] ******************** 2026-02-15 04:41:48.151467 | orchestrator | Sunday 15 
February 2026 04:41:43 +0000 (0:00:00.892) 0:00:51.145 *******
2026-02-15 04:41:48.151488 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:41:54.854197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:41:54.854320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:41:54.854331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:41:54.854340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:41:54.854358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:41:54.854378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:41:54.854394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:41:54.854401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:41:54.854409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:41:54.854416 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:41:54.854427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:41:54.854435 | orchestrator |
2026-02-15 04:41:54.854445 | orchestrator | TASK [manila : Copying over manila.conf] ***************************************
2026-02-15 04:41:54.854459 | orchestrator | Sunday 15 February 2026 04:41:48 +0000 (0:00:04.670) 0:00:55.815 *******
2026-02-15 04:41:54.854472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:41:59.126145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api',
'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:41:59.126261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:41:59.126279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:41:59.126311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:41:59.126324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:41:59.126378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:41:59.126391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:41:59.126433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:41:59.126447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:41:59.126459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:41:59.126476 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:41:59.126498 | orchestrator |
2026-02-15 04:41:59.126512 | orchestrator | TASK [manila : Copying over manila-share.conf] *********************************
2026-02-15 04:41:59.126526 | orchestrator | Sunday 15 February 2026 04:41:54 +0000 (0:00:06.486) 0:01:02.302 *******
2026-02-15 04:41:59.126540 | orchestrator | changed: [testbed-node-0] => (item=manila-share)
2026-02-15 04:41:59.126553 | orchestrator | changed: [testbed-node-1] => (item=manila-share)
2026-02-15 04:41:59.126584 | orchestrator | changed: [testbed-node-2] => (item=manila-share)
2026-02-15 04:41:59.126609 | orchestrator |
2026-02-15 04:41:59.126623 | orchestrator | TASK [manila : Copying over existing policy file] ******************************
2026-02-15 04:41:59.126635 | orchestrator | Sunday 15 February 2026 04:41:58 +0000 (0:00:03.644) 0:01:05.947 *******
2026-02-15 04:41:59.126658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:42:02.433928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:42:02.434109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:42:02.434130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:42:02.434144 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:42:02.434172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:42:02.434214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:42:02.434233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:42:02.434271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:42:02.434291 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:42:02.434308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:42:02.434327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:42:02.434365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:42:02.434377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:42:02.434387 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:42:02.434397 | orchestrator |
2026-02-15 04:42:02.434411 | orchestrator | TASK [manila : Check manila containers] ****************************************
2026-02-15 04:42:02.434429 | orchestrator | Sunday 15 February 2026 04:41:59 +0000 (0:00:00.635) 0:01:06.582 *******
2026-02-15 04:42:02.434459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True,
'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:42:45.960358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:42:45.960478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-02-15 04:42:45.960533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:42:45.960548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:42:45.960560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 04:42:45.960589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:42:45.960603 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:42:45.960615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 04:42:45.960634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:42:45.960651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:42:45.960663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.1.1.20251130', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 04:42:45.960675 | orchestrator |
2026-02-15 04:42:45.960689 | orchestrator | TASK [manila : Creating Manila database] ***************************************
2026-02-15 04:42:45.960702 | orchestrator | Sunday 15 February 2026 04:42:02 +0000 (0:00:03.301) 0:01:09.884 *******
2026-02-15 04:42:45.960713 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:42:45.960725 | orchestrator |
2026-02-15 04:42:45.960737 | orchestrator | TASK [manila : Creating Manila database user and setting permissions] **********
2026-02-15 04:42:45.960748 | orchestrator | Sunday 15 February 2026 04:42:04 +0000 (0:00:02.171) 0:01:12.055 *******
2026-02-15 04:42:45.960759 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:42:45.960770 | orchestrator |
2026-02-15 04:42:45.960781 | orchestrator | TASK [manila : Running Manila bootstrap container] *****************************
2026-02-15 04:42:45.960792 | orchestrator | Sunday 15 February 2026 04:42:07 +0000 (0:00:02.421) 0:01:14.477 *******
2026-02-15 04:42:45.960803 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:42:45.960814 | orchestrator |
2026-02-15 04:42:45.960826 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-15 04:42:45.960837 | orchestrator | Sunday 15 February 2026 04:42:45 +0000 (0:00:38.602) 0:01:53.079 *******
2026-02-15 04:42:45.960848 | orchestrator |
2026-02-15 04:42:45.960866 | orchestrator | TASK [manila : Flush handlers] *************************************************
2026-02-15 04:43:40.975025 | orchestrator | Sunday 15 February 2026 04:42:45
+0000 (0:00:00.072) 0:01:53.151 ******* 2026-02-15 04:43:40.975162 | orchestrator | 2026-02-15 04:43:40.975192 | orchestrator | TASK [manila : Flush handlers] ************************************************* 2026-02-15 04:43:40.975211 | orchestrator | Sunday 15 February 2026 04:42:45 +0000 (0:00:00.071) 0:01:53.222 ******* 2026-02-15 04:43:40.975227 | orchestrator | 2026-02-15 04:43:40.975246 | orchestrator | RUNNING HANDLER [manila : Restart manila-api container] ************************ 2026-02-15 04:43:40.975266 | orchestrator | Sunday 15 February 2026 04:42:45 +0000 (0:00:00.083) 0:01:53.306 ******* 2026-02-15 04:43:40.975286 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:43:40.975336 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:43:40.975348 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:43:40.975359 | orchestrator | 2026-02-15 04:43:40.975371 | orchestrator | RUNNING HANDLER [manila : Restart manila-data container] *********************** 2026-02-15 04:43:40.975382 | orchestrator | Sunday 15 February 2026 04:43:01 +0000 (0:00:15.111) 0:02:08.418 ******* 2026-02-15 04:43:40.975393 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:43:40.975404 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:43:40.975415 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:43:40.975425 | orchestrator | 2026-02-15 04:43:40.975436 | orchestrator | RUNNING HANDLER [manila : Restart manila-scheduler container] ****************** 2026-02-15 04:43:40.975447 | orchestrator | Sunday 15 February 2026 04:43:11 +0000 (0:00:10.561) 0:02:18.980 ******* 2026-02-15 04:43:40.975458 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:43:40.975469 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:43:40.975480 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:43:40.975491 | orchestrator | 2026-02-15 04:43:40.975505 | orchestrator | RUNNING HANDLER [manila : Restart manila-share container] ********************** 2026-02-15 
04:43:40.975518 | orchestrator | Sunday 15 February 2026 04:43:22 +0000 (0:00:10.497) 0:02:29.477 ******* 2026-02-15 04:43:40.975530 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:43:40.975543 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:43:40.975556 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:43:40.975568 | orchestrator | 2026-02-15 04:43:40.975580 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:43:40.975595 | orchestrator | testbed-node-0 : ok=28  changed=20  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 04:43:40.975610 | orchestrator | testbed-node-1 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-15 04:43:40.975623 | orchestrator | testbed-node-2 : ok=19  changed=13  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-15 04:43:40.975636 | orchestrator | 2026-02-15 04:43:40.975649 | orchestrator | 2026-02-15 04:43:40.975662 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:43:40.975690 | orchestrator | Sunday 15 February 2026 04:43:40 +0000 (0:00:18.399) 0:02:47.877 ******* 2026-02-15 04:43:40.975703 | orchestrator | =============================================================================== 2026-02-15 04:43:40.975717 | orchestrator | manila : Running Manila bootstrap container ---------------------------- 38.60s 2026-02-15 04:43:40.975729 | orchestrator | manila : Restart manila-share container -------------------------------- 18.40s 2026-02-15 04:43:40.975742 | orchestrator | manila : Restart manila-api container ---------------------------------- 15.11s 2026-02-15 04:43:40.975755 | orchestrator | service-ks-register : manila | Creating endpoints ---------------------- 13.06s 2026-02-15 04:43:40.975768 | orchestrator | manila : Restart manila-data container --------------------------------- 10.56s 2026-02-15 04:43:40.975780 | 
orchestrator | manila : Restart manila-scheduler container ---------------------------- 10.50s 2026-02-15 04:43:40.975797 | orchestrator | service-ks-register : manila | Creating services ------------------------ 6.62s 2026-02-15 04:43:40.975816 | orchestrator | manila : Copying over manila.conf --------------------------------------- 6.49s 2026-02-15 04:43:40.975830 | orchestrator | manila : Copying over config.json files for services -------------------- 4.67s 2026-02-15 04:43:40.975842 | orchestrator | service-cert-copy : manila | Copying over extra CA certificates --------- 4.15s 2026-02-15 04:43:40.975856 | orchestrator | service-ks-register : manila | Creating users --------------------------- 3.94s 2026-02-15 04:43:40.975867 | orchestrator | service-ks-register : manila | Granting user roles ---------------------- 3.94s 2026-02-15 04:43:40.975878 | orchestrator | manila : Copying over manila-share.conf --------------------------------- 3.64s 2026-02-15 04:43:40.975889 | orchestrator | service-ks-register : manila | Creating projects ------------------------ 3.46s 2026-02-15 04:43:40.975908 | orchestrator | manila : Check manila containers ---------------------------------------- 3.30s 2026-02-15 04:43:40.975954 | orchestrator | service-ks-register : manila | Creating roles --------------------------- 3.21s 2026-02-15 04:43:40.975966 | orchestrator | manila : Creating Manila database user and setting permissions ---------- 2.42s 2026-02-15 04:43:40.975977 | orchestrator | manila : Ensuring config directories exist ------------------------------ 2.36s 2026-02-15 04:43:40.975987 | orchestrator | manila : Creating Manila database --------------------------------------- 2.17s 2026-02-15 04:43:40.975999 | orchestrator | manila : Copy over multiple ceph configs for Manila --------------------- 1.81s 2026-02-15 04:43:41.290720 | orchestrator | + sh -c /opt/configuration/scripts/deploy/400-monitoring.sh 2026-02-15 04:43:53.436600 | orchestrator | 2026-02-15 04:43:53 
| INFO  | Task c3c69a83-a9b0-4889-94ea-a9f97e3cbb93 (netdata) was prepared for execution. 2026-02-15 04:43:53.436731 | orchestrator | 2026-02-15 04:43:53 | INFO  | It takes a moment until task c3c69a83-a9b0-4889-94ea-a9f97e3cbb93 (netdata) has been started and output is visible here. 2026-02-15 04:45:30.452459 | orchestrator | 2026-02-15 04:45:30.452568 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:45:30.452584 | orchestrator | 2026-02-15 04:45:30.452595 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:45:30.452606 | orchestrator | Sunday 15 February 2026 04:43:57 +0000 (0:00:00.239) 0:00:00.239 ******* 2026-02-15 04:45:30.452616 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2026-02-15 04:45:30.452626 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2026-02-15 04:45:30.452636 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2026-02-15 04:45:30.452646 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2026-02-15 04:45:30.452656 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2026-02-15 04:45:30.452665 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2026-02-15 04:45:30.452675 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2026-02-15 04:45:30.452684 | orchestrator | 2026-02-15 04:45:30.452694 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2026-02-15 04:45:30.452704 | orchestrator | 2026-02-15 04:45:30.452714 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-02-15 04:45:30.452723 | orchestrator | Sunday 15 February 2026 04:43:58 +0000 (0:00:00.848) 0:00:01.087 ******* 2026-02-15 04:45:30.452735 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 04:45:30.452747 | orchestrator | 2026-02-15 04:45:30.452758 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-02-15 04:45:30.452768 | orchestrator | Sunday 15 February 2026 04:43:59 +0000 (0:00:01.292) 0:00:02.380 ******* 2026-02-15 04:45:30.452778 | orchestrator | ok: [testbed-manager] 2026-02-15 04:45:30.452788 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:45:30.452815 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:45:30.452834 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:45:30.452845 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:45:30.452855 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:45:30.452865 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:45:30.452875 | orchestrator | 2026-02-15 04:45:30.452884 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-02-15 04:45:30.452894 | orchestrator | Sunday 15 February 2026 04:44:01 +0000 (0:00:01.911) 0:00:04.292 ******* 2026-02-15 04:45:30.452904 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:45:30.452914 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:45:30.452923 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:45:30.452933 | orchestrator | ok: [testbed-manager] 2026-02-15 04:45:30.452943 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:45:30.452995 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:45:30.453008 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:45:30.453020 | orchestrator | 2026-02-15 04:45:30.453046 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-02-15 04:45:30.453058 | orchestrator | Sunday 15 February 2026 04:44:04 +0000 (0:00:02.254) 0:00:06.546 ******* 
2026-02-15 04:45:30.453069 | orchestrator | changed: [testbed-manager] 2026-02-15 04:45:30.453080 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:45:30.453090 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:45:30.453101 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:45:30.453112 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:45:30.453123 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:45:30.453134 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:45:30.453145 | orchestrator | 2026-02-15 04:45:30.453156 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-02-15 04:45:30.453167 | orchestrator | Sunday 15 February 2026 04:44:05 +0000 (0:00:01.661) 0:00:08.208 ******* 2026-02-15 04:45:30.453178 | orchestrator | changed: [testbed-manager] 2026-02-15 04:45:30.453189 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:45:30.453200 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:45:30.453211 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:45:30.453222 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:45:30.453232 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:45:30.453243 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:45:30.453253 | orchestrator | 2026-02-15 04:45:30.453264 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-02-15 04:45:30.453276 | orchestrator | Sunday 15 February 2026 04:44:24 +0000 (0:00:18.343) 0:00:26.552 ******* 2026-02-15 04:45:30.453286 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:45:30.453297 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:45:30.453309 | orchestrator | changed: [testbed-manager] 2026-02-15 04:45:30.453319 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:45:30.453330 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:45:30.453341 | orchestrator | changed: [testbed-node-1] 2026-02-15 
04:45:30.453351 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:45:30.453360 | orchestrator | 2026-02-15 04:45:30.453370 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-02-15 04:45:30.453380 | orchestrator | Sunday 15 February 2026 04:45:03 +0000 (0:00:39.220) 0:01:05.772 ******* 2026-02-15 04:45:30.453390 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 04:45:30.453402 | orchestrator | 2026-02-15 04:45:30.453412 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-02-15 04:45:30.453422 | orchestrator | Sunday 15 February 2026 04:45:04 +0000 (0:00:01.575) 0:01:07.348 ******* 2026-02-15 04:45:30.453431 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-02-15 04:45:30.453441 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-02-15 04:45:30.453451 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-02-15 04:45:30.453461 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-02-15 04:45:30.453486 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-02-15 04:45:30.453497 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-02-15 04:45:30.453507 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-02-15 04:45:30.453516 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-02-15 04:45:30.453526 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-02-15 04:45:30.453535 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-02-15 04:45:30.453545 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-02-15 04:45:30.453555 | orchestrator | changed: [testbed-node-3] => 
(item=stream.conf) 2026-02-15 04:45:30.453572 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-02-15 04:45:30.453582 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-02-15 04:45:30.453592 | orchestrator | 2026-02-15 04:45:30.453602 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-02-15 04:45:30.453612 | orchestrator | Sunday 15 February 2026 04:45:08 +0000 (0:00:03.339) 0:01:10.688 ******* 2026-02-15 04:45:30.453622 | orchestrator | ok: [testbed-manager] 2026-02-15 04:45:30.453631 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:45:30.453641 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:45:30.453651 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:45:30.453661 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:45:30.453670 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:45:30.453680 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:45:30.453689 | orchestrator | 2026-02-15 04:45:30.453699 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-02-15 04:45:30.453709 | orchestrator | Sunday 15 February 2026 04:45:09 +0000 (0:00:01.100) 0:01:11.789 ******* 2026-02-15 04:45:30.453719 | orchestrator | changed: [testbed-manager] 2026-02-15 04:45:30.453728 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:45:30.453738 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:45:30.453884 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:45:30.453898 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:45:30.453908 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:45:30.453917 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:45:30.453932 | orchestrator | 2026-02-15 04:45:30.453948 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-02-15 04:45:30.453988 | orchestrator | Sunday 15 February 2026 04:45:10 +0000 
(0:00:01.149) 0:01:12.938 ******* 2026-02-15 04:45:30.454005 | orchestrator | ok: [testbed-manager] 2026-02-15 04:45:30.454093 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:45:30.454112 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:45:30.454129 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:45:30.454139 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:45:30.454148 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:45:30.454157 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:45:30.454167 | orchestrator | 2026-02-15 04:45:30.454177 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-02-15 04:45:30.454186 | orchestrator | Sunday 15 February 2026 04:45:11 +0000 (0:00:01.149) 0:01:14.087 ******* 2026-02-15 04:45:30.454196 | orchestrator | ok: [testbed-manager] 2026-02-15 04:45:30.454213 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:45:30.454223 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:45:30.454233 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:45:30.454242 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:45:30.454252 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:45:30.454262 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:45:30.454272 | orchestrator | 2026-02-15 04:45:30.454284 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-02-15 04:45:30.454300 | orchestrator | Sunday 15 February 2026 04:45:13 +0000 (0:00:02.054) 0:01:16.142 ******* 2026-02-15 04:45:30.454316 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-02-15 04:45:30.454335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 04:45:30.454351 | orchestrator | 2026-02-15 
04:45:30.454368 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-02-15 04:45:30.454385 | orchestrator | Sunday 15 February 2026 04:45:15 +0000 (0:00:01.482) 0:01:17.625 ******* 2026-02-15 04:45:30.454400 | orchestrator | changed: [testbed-manager] 2026-02-15 04:45:30.454417 | orchestrator | 2026-02-15 04:45:30.454428 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-02-15 04:45:30.454437 | orchestrator | Sunday 15 February 2026 04:45:18 +0000 (0:00:03.253) 0:01:20.878 ******* 2026-02-15 04:45:30.454458 | orchestrator | changed: [testbed-manager] 2026-02-15 04:45:30.454468 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:45:30.454478 | orchestrator | changed: [testbed-node-3] 2026-02-15 04:45:30.454487 | orchestrator | changed: [testbed-node-5] 2026-02-15 04:45:30.454498 | orchestrator | changed: [testbed-node-4] 2026-02-15 04:45:30.454508 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:45:30.454518 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:45:30.454529 | orchestrator | 2026-02-15 04:45:30.454540 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:45:30.454551 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:45:30.454563 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:45:30.454574 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:45:30.454585 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:45:30.454608 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:45:30.925495 | orchestrator | testbed-node-4 : ok=15  changed=7  
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:45:30.925601 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:45:30.925617 | orchestrator | 2026-02-15 04:45:30.925629 | orchestrator | 2026-02-15 04:45:30.925642 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:45:30.925654 | orchestrator | Sunday 15 February 2026 04:45:30 +0000 (0:00:11.950) 0:01:32.829 ******* 2026-02-15 04:45:30.925665 | orchestrator | =============================================================================== 2026-02-15 04:45:30.925676 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 39.22s 2026-02-15 04:45:30.925687 | orchestrator | osism.services.netdata : Add repository -------------------------------- 18.34s 2026-02-15 04:45:30.925698 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.95s 2026-02-15 04:45:30.925709 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 3.34s 2026-02-15 04:45:30.925720 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 3.25s 2026-02-15 04:45:30.925731 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.25s 2026-02-15 04:45:30.925742 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.05s 2026-02-15 04:45:30.925753 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.91s 2026-02-15 04:45:30.925764 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 1.66s 2026-02-15 04:45:30.925775 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.58s 2026-02-15 04:45:30.925785 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 
1.48s 2026-02-15 04:45:30.925796 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.29s 2026-02-15 04:45:30.925807 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.15s 2026-02-15 04:45:30.925818 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.15s 2026-02-15 04:45:30.925829 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.10s 2026-02-15 04:45:30.925841 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s 2026-02-15 04:45:33.369514 | orchestrator | 2026-02-15 04:45:33 | INFO  | Task 87c22ff5-6fc7-43e1-8780-fc8d7e0b8c63 (prometheus) was prepared for execution. 2026-02-15 04:45:33.369607 | orchestrator | 2026-02-15 04:45:33 | INFO  | It takes a moment until task 87c22ff5-6fc7-43e1-8780-fc8d7e0b8c63 (prometheus) has been started and output is visible here. 2026-02-15 04:45:42.890542 | orchestrator | 2026-02-15 04:45:42.890666 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:45:42.890683 | orchestrator | 2026-02-15 04:45:42.890695 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:45:42.890707 | orchestrator | Sunday 15 February 2026 04:45:37 +0000 (0:00:00.278) 0:00:00.278 ******* 2026-02-15 04:45:42.890718 | orchestrator | ok: [testbed-manager] 2026-02-15 04:45:42.890730 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:45:42.890742 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:45:42.890753 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:45:42.890764 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:45:42.890775 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:45:42.890785 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:45:42.890796 | orchestrator | 2026-02-15 04:45:42.890807 | orchestrator | 
TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:45:42.890818 | orchestrator | Sunday 15 February 2026 04:45:38 +0000 (0:00:00.858) 0:00:01.136 ******* 2026-02-15 04:45:42.890830 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-02-15 04:45:42.890841 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-02-15 04:45:42.890852 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-02-15 04:45:42.890863 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-02-15 04:45:42.890874 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-02-15 04:45:42.890885 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-02-15 04:45:42.890896 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-02-15 04:45:42.890907 | orchestrator | 2026-02-15 04:45:42.890918 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-02-15 04:45:42.890928 | orchestrator | 2026-02-15 04:45:42.890939 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-15 04:45:42.890950 | orchestrator | Sunday 15 February 2026 04:45:39 +0000 (0:00:00.914) 0:00:02.051 ******* 2026-02-15 04:45:42.890989 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 04:45:42.891002 | orchestrator | 2026-02-15 04:45:42.891013 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-02-15 04:45:42.891024 | orchestrator | Sunday 15 February 2026 04:45:40 +0000 (0:00:01.413) 0:00:03.465 ******* 2026-02-15 04:45:42.891040 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-15 04:45:42.891056 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:42.891104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:42.891147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:42.891195 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:42.891213 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:42.891227 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:42.891241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:42.891255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:42.891268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:42.891291 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:42.891321 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:43.927622 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:43.927710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-02-15 04:45:43.927725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:43.927737 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:43.927751 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-15 04:45:43.927788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:43.927820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-15 04:45:43.927829 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-15 04:45:43.927835 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:43.927841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:43.927848 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:43.927859 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:43.927865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:43.927876 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-15 04:45:43.927888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:49.007031 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:49.007130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:49.007141 | orchestrator | 2026-02-15 04:45:49.007149 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-02-15 04:45:49.007158 | orchestrator | Sunday 15 February 2026 04:45:43 +0000 (0:00:03.042) 0:00:06.507 ******* 2026-02-15 04:45:49.007166 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 04:45:49.007174 | orchestrator | 2026-02-15 04:45:49.007182 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-02-15 04:45:49.007209 | orchestrator | Sunday 15 February 2026 04:45:45 +0000 (0:00:01.768) 0:00:08.275 ******* 2026-02-15 04:45:49.007218 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 
'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-02-15 04:45:49.007226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:49.007233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:49.007254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:49.007277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:49.007284 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:49.007291 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-02-15 04:45:49.007304 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-02-15 04:45:49.007311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:49.007318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:49.007326 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:49.007337 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:49.007350 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:51.208850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:51.209044 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:51.209064 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:51.209079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:51.209091 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-15 04:45:51.209119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:51.209131 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-15 04:45:51.209162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-15 04:45:51.209177 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-15 04:45:51.209200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:51.209212 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:51.209225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:45:51.209242 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:51.209254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:51.209274 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:52.146461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:45:52.146565 | orchestrator | 2026-02-15 04:45:52.146582 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-02-15 04:45:52.146595 | orchestrator | Sunday 15 February 2026 04:45:51 +0000 (0:00:05.503) 0:00:13.779 ******* 2026-02-15 04:45:52.146609 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-02-15 04:45:52.146623 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 04:45:52.146635 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 04:45:52.146668 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-15 04:45:52.146699 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:52.146732 | orchestrator | skipping: [testbed-manager]
2026-02-15 04:45:52.146745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:52.146757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:52.146769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:52.146808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:52.146821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:52.146833 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:45:52.146953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:52.146997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:52.147031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:52.729308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:52.729410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:52.729428 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:45:52.729443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:52.729456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:52.729484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:52.729496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:52.729528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:52.729540 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:45:52.729571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:52.729586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:52.729605 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-15 04:45:52.729623 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:45:52.729656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:52.729676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:52.729702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-15 04:45:52.729733 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:45:52.729752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:52.729786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:53.791842 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-15 04:45:53.791945 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:45:53.792034 | orchestrator |
2026-02-15 04:45:53.792058 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-02-15 04:45:53.792078 | orchestrator | Sunday 15 February 2026 04:45:52 +0000 (0:00:01.525) 0:00:15.305 *******
2026-02-15 04:45:53.792090 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-15 04:45:53.792103 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:53.792130 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:53.792163 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-15 04:45:53.792192 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:53.792205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:53.792215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:53.792226 | orchestrator | skipping: [testbed-manager]
2026-02-15 04:45:53.792236 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:53.792246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:53.792269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:53.792280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:53.792290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:53.792307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:54.934894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:54.935021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:54.935036 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:45:54.935046 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:45:54.935055 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:54.935081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:54.935101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:54.935110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:54.935118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:54.935126 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:45:54.935148 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:54.935157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:54.935165 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-15 04:45:54.935172 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:45:54.935180 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:54.935197 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:54.935206 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-15 04:45:54.935220 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:45:54.935232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:54.935252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:58.679748 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-15 04:45:58.679875 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:45:58.679903 | orchestrator |
2026-02-15 04:45:58.680011 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-02-15 04:45:58.680038 | orchestrator | Sunday 15 February 2026 04:45:54 +0000 (0:00:02.205) 0:00:17.511 *******
2026-02-15 04:45:58.680059 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-15 04:45:58.680114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:58.680155 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:58.680172 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:58.680184 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:58.680215 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:58.680228 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:58.680240 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:45:58.680264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:58.680288 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:58.680325 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:58.680347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:45:58.680366 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:45:58.680397 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:46:01.385352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:46:01.385464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:46:01.385478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:46:01.385502 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-15 04:46:01.385512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:46:01.385524 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-02-15 04:46:01.385549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-15 04:46:01.385559 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-02-15 04:46:01.385582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:46:01.385591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:46:01.385605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-02-15 04:46:01.385614 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:46:01.385623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:46:01.385632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:46:01.385649 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 04:46:05.328922 | orchestrator | 2026-02-15 04:46:05.329086 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-02-15 04:46:05.329113 | orchestrator | Sunday 15 February 2026 04:46:01 +0000 (0:00:06.447) 0:00:23.958 ******* 2026-02-15 04:46:05.329132 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 04:46:05.329151 | orchestrator | 2026-02-15 04:46:05.329168 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-02-15 04:46:05.329185 | orchestrator | Sunday 15 February 2026 04:46:02 +0000 (0:00:00.972) 0:00:24.930 ******* 2026-02-15 04:46:05.329207 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092679, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.401384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329231 | orchestrator | skipping: [testbed-node-1] => 
(item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092679, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.401384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329274 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092679, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.401384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329296 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092679, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.401384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 04:46:05.329317 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092695, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4053035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329339 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092679, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.401384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329407 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092679, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.401384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329422 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092695, 'dev': 167, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4053035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329434 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092695, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4053035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329451 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092695, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4053035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329464 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1092679, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.401384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329477 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092695, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4053035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329497 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092670, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.400923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:05.329518 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092670, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.400923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234359 | orchestrator | 
skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092670, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.400923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234459 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092670, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.400923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234491 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092670, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.400923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234504 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092695, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4053035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234515 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1092695, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4053035, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 04:46:07.234546 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092689, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4038908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234558 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092689, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 
'mtime': 1764530892.0, 'ctime': 1771123505.4038908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234587 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092689, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4038908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234600 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092689, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4038908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234617 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092689, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4038908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234628 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092670, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.400923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234639 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092665, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3983383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234663 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092665, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3983383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234682 | orchestrator | skipping: 
[testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092665, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3983383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:07.234710 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092665, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3983383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887140 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092665, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3983383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887241 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092689, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4038908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887257 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092680, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4017859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887270 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092680, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4017859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887301 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092680, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771123505.4017859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887313 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092680, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4017859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887325 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092687, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4036143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887352 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092680, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4017859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887369 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092665, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3983383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887388 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1092670, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.400923, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 04:46:08.887419 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092687, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4036143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887441 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092682, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.402656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887460 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092687, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4036143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887480 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092687, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4036143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:08.887512 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092687, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4036143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029207 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092677, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4011242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029270 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092682, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.402656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029302 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092680, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771123505.4017859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029315 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092682, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.402656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029325 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092682, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.402656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029336 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092682, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.402656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029347 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1092689, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4038908, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 04:46:10.029376 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092687, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4036143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029389 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092677, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4011242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029407 | orchestrator | skipping: [testbed-node-3] => (item={'path': 
'/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092677, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4011242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029414 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092677, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4011242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029421 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092694, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4051197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029427 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092682, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.402656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029434 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092677, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4011242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:10.029448 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092677, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4011242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.531858 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092694, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 
'ctime': 1771123505.4051197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.531969 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092694, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4051197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532033 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092661, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3975554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532046 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092694, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4051197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532058 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092694, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4051197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532069 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092694, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4051197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532094 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092661, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3975554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532130 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092661, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3975554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532143 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092711, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532155 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092661, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3975554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532166 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1092665, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3983383, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-02-15 04:46:11.532178 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092661, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3975554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532189 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092661, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3975554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532205 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092711, 'dev': 167, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:11.532233 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092711, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:12.936328 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092692, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.404843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:12.936424 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092711, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:12.936438 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092692, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.404843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:12.936448 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092711, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:12.936458 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092711, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:12.936481 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092692, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.404843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:12.936513 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092692, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.404843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:12.936541 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092667, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3989353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-02-15 04:46:12.936554 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1092680, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4017859, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:12.936566 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092667, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3989353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:12.936578 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092667, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3989353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:12.936590 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092692, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.404843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:12.936613 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092692, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.404843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:12.936626 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092667, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3989353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:12.936644 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092663, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3979125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539449 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092663, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3979125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539538 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092667, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3989353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539553 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092667, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3989353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539565 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092686, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4033208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539598 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092663, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3979125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539622 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092686, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4033208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539634 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092663, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3979125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539662 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092684, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4028785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539675 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1092687, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4036143, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539687 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092663, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3979125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539699 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092663, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3979125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539717 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092686, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4033208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539733 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092708, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539746 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:46:14.539759 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092686, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4033208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:14.539776 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092684, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4028785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.343687 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092686, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4033208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.343795 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092686, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4033208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.343838 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092684, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4028785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.343851 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092684, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4028785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.343878 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092684, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4028785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.343890 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092684, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4028785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.343902 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092708, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.343914 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:46:22.343946 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092708, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.343959 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:46:22.343970 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092708, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.344044 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:46:22.344058 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092708, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.344069 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:46:22.344086 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1092682, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.402656, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.344099 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092708, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.344110 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:46:22.344122 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1092677, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4011242, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:22.344142 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092694, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4051197, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:46.724640 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092661, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3975554, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:46.724782 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1092711, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:46.724799 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1092692, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.404843, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:46.724825 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1092667, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3989353, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:46.724839 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1092663, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3979125, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:46.724850 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1092686, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4033208, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False,
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:46.724863 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1092684, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.4028785, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:46.724893 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1092708, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.407786, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-02-15 04:46:46.724914 | orchestrator |
2026-02-15 04:46:46.724928 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2026-02-15 04:46:46.724941 | orchestrator | Sunday 15 February 2026 04:46:27 +0000 (0:00:25.034) 0:00:49.964 *******
2026-02-15 04:46:46.724952 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-15 04:46:46.724964 | orchestrator |
2026-02-15 04:46:46.724976 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2026-02-15 04:46:46.724987 | orchestrator | Sunday 15 February 2026 04:46:28 +0000 (0:00:00.709) 0:00:50.674 *******
2026-02-15 04:46:46.725045 | orchestrator | [WARNING]: Skipped
2026-02-15 04:46:46.725057 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725070 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2026-02-15 04:46:46.725081 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725092 | orchestrator | node-0/prometheus.yml.d' is not a directory
2026-02-15 04:46:46.725103 | orchestrator | [WARNING]: Skipped
2026-02-15 04:46:46.725114 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725125 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2026-02-15 04:46:46.725137 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725149 | orchestrator | manager/prometheus.yml.d' is not a directory
2026-02-15 04:46:46.725162 | orchestrator | [WARNING]: Skipped
2026-02-15 04:46:46.725174 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725186 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2026-02-15 04:46:46.725199 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725212 | orchestrator | node-1/prometheus.yml.d' is not a directory
2026-02-15 04:46:46.725224 | orchestrator | [WARNING]: Skipped
2026-02-15 04:46:46.725238 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725250 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2026-02-15 04:46:46.725263 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725277 | orchestrator | node-2/prometheus.yml.d' is not a directory
2026-02-15 04:46:46.725289 | orchestrator | [WARNING]: Skipped
2026-02-15 04:46:46.725308 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725321 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2026-02-15 04:46:46.725334 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725347 | orchestrator | node-3/prometheus.yml.d' is not a directory
2026-02-15 04:46:46.725360 | orchestrator | [WARNING]: Skipped
2026-02-15 04:46:46.725372 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725385 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2026-02-15 04:46:46.725398 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725410 | orchestrator | node-4/prometheus.yml.d' is not a directory
2026-02-15 04:46:46.725422 | orchestrator | [WARNING]: Skipped
2026-02-15 04:46:46.725434 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725447 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2026-02-15 04:46:46.725459 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2026-02-15 04:46:46.725472 | orchestrator | node-5/prometheus.yml.d' is not a directory
2026-02-15 04:46:46.725492 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 04:46:46.725503 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-15 04:46:46.725514 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-02-15 04:46:46.725525 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-02-15 04:46:46.725536 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-02-15 04:46:46.725547 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-02-15 04:46:46.725557 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-02-15 04:46:46.725568 | orchestrator |
2026-02-15 04:46:46.725580 | orchestrator | TASK
[prometheus : Copying over prometheus config file] ************************ 2026-02-15 04:46:46.725591 | orchestrator | Sunday 15 February 2026 04:46:29 +0000 (0:00:01.823) 0:00:52.497 ******* 2026-02-15 04:46:46.725602 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-15 04:46:46.725614 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:46:46.725625 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-15 04:46:46.725636 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:46:46.725647 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-15 04:46:46.725658 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:46:46.725676 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-15 04:47:03.667741 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:47:03.667850 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-15 04:47:03.667867 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:47:03.667879 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-02-15 04:47:03.667890 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:47:03.667902 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-02-15 04:47:03.667912 | orchestrator | 2026-02-15 04:47:03.667925 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-02-15 04:47:03.667936 | orchestrator | Sunday 15 February 2026 04:46:46 +0000 (0:00:16.805) 0:01:09.303 ******* 2026-02-15 04:47:03.667947 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  
2026-02-15 04:47:03.667958 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:47:03.667969 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-15 04:47:03.667980 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:47:03.667990 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-15 04:47:03.668077 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-15 04:47:03.668096 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:47:03.668110 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:47:03.668122 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-15 04:47:03.668133 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:47:03.668144 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-02-15 04:47:03.668155 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:47:03.668165 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-02-15 04:47:03.668176 | orchestrator | 2026-02-15 04:47:03.668189 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-02-15 04:47:03.668200 | orchestrator | Sunday 15 February 2026 04:46:49 +0000 (0:00:02.981) 0:01:12.284 ******* 2026-02-15 04:47:03.668211 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-15 04:47:03.668250 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:47:03.668263 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-15 04:47:03.668276 | 
orchestrator | skipping: [testbed-node-1] 2026-02-15 04:47:03.668304 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-15 04:47:03.668318 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:47:03.668331 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-15 04:47:03.668344 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:47:03.668357 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-15 04:47:03.668370 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:47:03.668383 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-02-15 04:47:03.668397 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:47:03.668410 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-02-15 04:47:03.668422 | orchestrator | 2026-02-15 04:47:03.668436 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-02-15 04:47:03.668449 | orchestrator | Sunday 15 February 2026 04:46:51 +0000 (0:00:01.759) 0:01:14.044 ******* 2026-02-15 04:47:03.668462 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 04:47:03.668475 | orchestrator | 2026-02-15 04:47:03.668486 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-02-15 04:47:03.668498 | orchestrator | Sunday 15 February 2026 04:46:52 +0000 (0:00:00.760) 0:01:14.805 ******* 2026-02-15 04:47:03.668508 | orchestrator | skipping: [testbed-manager] 2026-02-15 04:47:03.668519 | orchestrator | skipping: [testbed-node-0] 
2026-02-15 04:47:03.668532 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:47:03.668550 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:47:03.668577 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:47:03.668596 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:47:03.668613 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:47:03.668630 | orchestrator |
2026-02-15 04:47:03.668647 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2026-02-15 04:47:03.668664 | orchestrator | Sunday 15 February 2026 04:46:52 +0000 (0:00:00.779) 0:01:15.584 *******
2026-02-15 04:47:03.668682 | orchestrator | skipping: [testbed-manager]
2026-02-15 04:47:03.668699 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:47:03.668719 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:47:03.668738 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:47:03.668756 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:47:03.668799 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:47:03.668811 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:47:03.668821 | orchestrator |
2026-02-15 04:47:03.668832 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2026-02-15 04:47:03.668862 | orchestrator | Sunday 15 February 2026 04:46:55 +0000 (0:00:02.104) 0:01:17.689 *******
2026-02-15 04:47:03.668874 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-15 04:47:03.668885 | orchestrator | skipping: [testbed-manager]
2026-02-15 04:47:03.668896 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-15 04:47:03.668907 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-15 04:47:03.668918 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:47:03.668929 |
orchestrator | skipping: [testbed-node-0]
2026-02-15 04:47:03.668940 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-15 04:47:03.668962 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:47:03.668973 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-15 04:47:03.668984 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:47:03.669017 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-15 04:47:03.669031 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:47:03.669042 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2026-02-15 04:47:03.669053 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:47:03.669064 | orchestrator |
2026-02-15 04:47:03.669075 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2026-02-15 04:47:03.669086 | orchestrator | Sunday 15 February 2026 04:46:56 +0000 (0:00:01.487) 0:01:19.177 *******
2026-02-15 04:47:03.669096 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-15 04:47:03.669108 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-15 04:47:03.669118 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-15 04:47:03.669129 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:47:03.669140 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:47:03.669151 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:47:03.669162 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-15 04:47:03.669173 |
orchestrator | skipping: [testbed-node-3]
2026-02-15 04:47:03.669183 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-15 04:47:03.669194 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:47:03.669205 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-15 04:47:03.669224 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2026-02-15 04:47:03.669235 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:47:03.669246 | orchestrator |
2026-02-15 04:47:03.669256 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ******************
2026-02-15 04:47:03.669267 | orchestrator | Sunday 15 February 2026 04:46:58 +0000 (0:00:01.592) 0:01:20.770 *******
2026-02-15 04:47:03.669278 | orchestrator | [WARNING]: Skipped
2026-02-15 04:47:03.669291 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path
2026-02-15 04:47:03.669301 | orchestrator | due to this access issue:
2026-02-15 04:47:03.669312 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is
2026-02-15 04:47:03.669323 | orchestrator | not a directory
2026-02-15 04:47:03.669334 | orchestrator | ok: [testbed-manager -> localhost]
2026-02-15 04:47:03.669345 | orchestrator |
2026-02-15 04:47:03.669356 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] ***************
2026-02-15 04:47:03.669366 | orchestrator | Sunday 15 February 2026 04:46:59 +0000 (0:00:01.144) 0:01:21.914 *******
2026-02-15 04:47:03.669377 | orchestrator | skipping: [testbed-manager]
2026-02-15 04:47:03.669388 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:47:03.669399 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:47:03.669410 | orchestrator |
skipping: [testbed-node-2]
2026-02-15 04:47:03.669421 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:47:03.669432 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:47:03.669443 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:47:03.669454 | orchestrator |
2026-02-15 04:47:03.669465 | orchestrator | TASK [prometheus : Template extra prometheus server config files] **************
2026-02-15 04:47:03.669475 | orchestrator | Sunday 15 February 2026 04:47:00 +0000 (0:00:00.927) 0:01:22.841 *******
2026-02-15 04:47:03.669493 | orchestrator | skipping: [testbed-manager]
2026-02-15 04:47:03.669504 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:47:03.669515 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:47:03.669526 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:47:03.669536 | orchestrator | skipping: [testbed-node-3]
2026-02-15 04:47:03.669547 | orchestrator | skipping: [testbed-node-4]
2026-02-15 04:47:03.669558 | orchestrator | skipping: [testbed-node-5]
2026-02-15 04:47:03.669569 | orchestrator |
2026-02-15 04:47:03.669580 | orchestrator | TASK [prometheus : Check prometheus containers] ********************************
2026-02-15 04:47:03.669591 | orchestrator | Sunday 15 February 2026 04:47:01 +0000 (0:00:00.931) 0:01:23.773 *******
2026-02-15 04:47:03.669613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:47:05.369726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value':
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:47:05.369870 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-02-15 04:47:05.369898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:47:05.369941 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter',
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:47:05.369962 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:47:05.370133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:47:05.370163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:47:05.370201 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-02-15 04:47:05.370215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:47:05.370229 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:47:05.370244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:47:05.370265 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:47:05.370290 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:47:05.370304 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/',
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:47:05.370327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:47:07.357493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:47:07.357616 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-15 04:47:07.357632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True,
'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-15 04:47:07.357662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-02-15 04:47:07.357673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:47:07.357706 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http',
'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-02-15 04:47:07.357737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:47:07.357749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:47:07.357759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'volumes':
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-02-15 04:47:07.357770 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:47:07.357785 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:47:07.357802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:47:07.357812 |
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 04:47:07.357823 | orchestrator |
2026-02-15 04:47:07.357835 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2026-02-15 04:47:07.357847 | orchestrator | Sunday 15 February 2026 04:47:05 +0000 (0:00:04.180) 0:01:27.953 *******
2026-02-15 04:47:07.357858 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-15 04:47:07.357868 | orchestrator | skipping: [testbed-manager]
2026-02-15 04:47:07.357878 | orchestrator |
2026-02-15 04:47:07.357888 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-15 04:47:07.357898 | orchestrator | Sunday 15 February 2026 04:47:06 +0000 (0:00:01.284) 0:01:29.238 *******
2026-02-15 04:47:07.357908 | orchestrator |
2026-02-15 04:47:07.357918 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-15 04:47:07.357927 | orchestrator | Sunday 15 February 2026 04:47:06 +0000 (0:00:00.251) 0:01:29.490 *******
2026-02-15 04:47:07.357937 | orchestrator |
2026-02-15 04:47:07.357947 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-15 04:47:07.357956 | orchestrator | Sunday 15 February 2026 04:47:06 +0000 (0:00:00.071) 0:01:29.561 *******
2026-02-15 04:47:07.357966 | orchestrator |
2026-02-15 04:47:07.357976 | orchestrator | TASK [prometheus : Flush handlers]
*********************************************
2026-02-15 04:47:07.357992 | orchestrator | Sunday 15 February 2026 04:47:07 +0000 (0:00:00.073) 0:01:29.634 *******
2026-02-15 04:48:43.971869 | orchestrator |
2026-02-15 04:48:43.972013 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-15 04:48:43.972045 | orchestrator | Sunday 15 February 2026 04:47:07 +0000 (0:00:00.064) 0:01:29.699 *******
2026-02-15 04:48:43.972130 | orchestrator |
2026-02-15 04:48:43.972148 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-15 04:48:43.972160 | orchestrator | Sunday 15 February 2026 04:47:07 +0000 (0:00:00.068) 0:01:29.768 *******
2026-02-15 04:48:43.972171 | orchestrator |
2026-02-15 04:48:43.972182 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2026-02-15 04:48:43.972193 | orchestrator | Sunday 15 February 2026 04:47:07 +0000 (0:00:00.065) 0:01:29.834 *******
2026-02-15 04:48:43.972204 | orchestrator |
2026-02-15 04:48:43.972215 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2026-02-15 04:48:43.972226 | orchestrator | Sunday 15 February 2026 04:47:07 +0000 (0:00:00.094) 0:01:29.928 *******
2026-02-15 04:48:43.972237 | orchestrator | changed: [testbed-manager]
2026-02-15 04:48:43.972249 | orchestrator |
2026-02-15 04:48:43.972260 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2026-02-15 04:48:43.972284 | orchestrator | Sunday 15 February 2026 04:47:29 +0000 (0:00:22.429) 0:01:52.358 *******
2026-02-15 04:48:43.972322 | orchestrator | changed: [testbed-manager]
2026-02-15 04:48:43.972334 | orchestrator | changed: [testbed-node-3]
2026-02-15 04:48:43.972346 | orchestrator | changed: [testbed-node-4]
2026-02-15 04:48:43.972357 | orchestrator | changed: [testbed-node-5]
2026-02-15 04:48:43.972368 | orchestrator | changed:
[testbed-node-2]
2026-02-15 04:48:43.972379 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:48:43.972389 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:48:43.972401 | orchestrator |
2026-02-15 04:48:43.972414 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2026-02-15 04:48:43.972427 | orchestrator | Sunday 15 February 2026 04:47:43 +0000 (0:00:13.549) 0:02:05.907 *******
2026-02-15 04:48:43.972439 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:48:43.972452 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:48:43.972464 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:48:43.972476 | orchestrator |
2026-02-15 04:48:43.972489 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2026-02-15 04:48:43.972502 | orchestrator | Sunday 15 February 2026 04:47:49 +0000 (0:00:05.909) 0:02:11.816 *******
2026-02-15 04:48:43.972515 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:48:43.972527 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:48:43.972540 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:48:43.972552 | orchestrator |
2026-02-15 04:48:43.972564 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2026-02-15 04:48:43.972575 | orchestrator | Sunday 15 February 2026 04:47:59 +0000 (0:00:10.619) 0:02:22.436 *******
2026-02-15 04:48:43.972601 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:48:43.972613 | orchestrator | changed: [testbed-manager]
2026-02-15 04:48:43.972624 | orchestrator | changed: [testbed-node-3]
2026-02-15 04:48:43.972635 | orchestrator | changed: [testbed-node-4]
2026-02-15 04:48:43.972645 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:48:43.972656 | orchestrator | changed: [testbed-node-5]
2026-02-15 04:48:43.972666 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:48:43.972677 | orchestrator |
2026-02-15 04:48:43.972688
| orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2026-02-15 04:48:43.972698 | orchestrator | Sunday 15 February 2026 04:48:13 +0000 (0:00:13.490) 0:02:35.927 *******
2026-02-15 04:48:43.972709 | orchestrator | changed: [testbed-manager]
2026-02-15 04:48:43.972719 | orchestrator |
2026-02-15 04:48:43.972730 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-02-15 04:48:43.972741 | orchestrator | Sunday 15 February 2026 04:48:22 +0000 (0:00:09.037) 0:02:44.964 *******
2026-02-15 04:48:43.972752 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:48:43.972763 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:48:43.972773 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:48:43.972784 | orchestrator |
2026-02-15 04:48:43.972795 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-02-15 04:48:43.972805 | orchestrator | Sunday 15 February 2026 04:48:33 +0000 (0:00:10.659) 0:02:55.624 *******
2026-02-15 04:48:43.972816 | orchestrator | changed: [testbed-manager]
2026-02-15 04:48:43.972827 | orchestrator |
2026-02-15 04:48:43.972837 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-02-15 04:48:43.972848 | orchestrator | Sunday 15 February 2026 04:48:38 +0000 (0:00:05.428) 0:03:01.052 *******
2026-02-15 04:48:43.972859 | orchestrator | changed: [testbed-node-3]
2026-02-15 04:48:43.972869 | orchestrator | changed: [testbed-node-4]
2026-02-15 04:48:43.972880 | orchestrator | changed: [testbed-node-5]
2026-02-15 04:48:43.972890 | orchestrator |
2026-02-15 04:48:43.972901 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 04:48:43.972914 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-15 04:48:43.972926 | orchestrator |
testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-15 04:48:43.972944 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-15 04:48:43.972955 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-02-15 04:48:43.972966 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-15 04:48:43.972997 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-15 04:48:43.973009 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-02-15 04:48:43.973020 | orchestrator |
2026-02-15 04:48:43.973031 | orchestrator |
2026-02-15 04:48:43.973042 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 04:48:43.973081 | orchestrator | Sunday 15 February 2026 04:48:43 +0000 (0:00:04.988) 0:03:06.041 *******
2026-02-15 04:48:43.973092 | orchestrator | ===============================================================================
2026-02-15 04:48:43.973103 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.03s
2026-02-15 04:48:43.973114 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 22.43s
2026-02-15 04:48:43.973124 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.81s
2026-02-15 04:48:43.973135 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.55s
2026-02-15 04:48:43.973146 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.49s
2026-02-15 04:48:43.973156 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.66s
2026-02-15 04:48:43.973167 |
orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.62s 2026-02-15 04:48:43.973178 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.04s 2026-02-15 04:48:43.973197 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.45s 2026-02-15 04:48:43.973215 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 5.91s 2026-02-15 04:48:43.973234 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.50s 2026-02-15 04:48:43.973252 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.43s 2026-02-15 04:48:43.973272 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 4.99s 2026-02-15 04:48:43.973283 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.18s 2026-02-15 04:48:43.973294 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.04s 2026-02-15 04:48:43.973304 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.98s 2026-02-15 04:48:43.973315 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 2.21s 2026-02-15 04:48:43.973332 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.10s 2026-02-15 04:48:43.973343 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.82s 2026-02-15 04:48:43.973354 | orchestrator | prometheus : include_tasks ---------------------------------------------- 1.77s 2026-02-15 04:48:48.834231 | orchestrator | 2026-02-15 04:48:48 | INFO  | Task 0ed9ec15-76c4-45bf-868c-05a3b3cd5222 (grafana) was prepared for execution. 
2026-02-15 04:48:48.834332 | orchestrator | 2026-02-15 04:48:48 | INFO  | It takes a moment until task 0ed9ec15-76c4-45bf-868c-05a3b3cd5222 (grafana) has been started and output is visible here. 2026-02-15 04:48:58.574314 | orchestrator | 2026-02-15 04:48:58.574436 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 04:48:58.574453 | orchestrator | 2026-02-15 04:48:58.574548 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 04:48:58.574573 | orchestrator | Sunday 15 February 2026 04:48:52 +0000 (0:00:00.257) 0:00:00.257 ******* 2026-02-15 04:48:58.574587 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:48:58.574600 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:48:58.574611 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:48:58.574622 | orchestrator | 2026-02-15 04:48:58.574633 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 04:48:58.574644 | orchestrator | Sunday 15 February 2026 04:48:53 +0000 (0:00:00.343) 0:00:00.601 ******* 2026-02-15 04:48:58.574655 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-02-15 04:48:58.574666 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-02-15 04:48:58.574747 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-02-15 04:48:58.574770 | orchestrator | 2026-02-15 04:48:58.574789 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-02-15 04:48:58.574808 | orchestrator | 2026-02-15 04:48:58.574820 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-15 04:48:58.574831 | orchestrator | Sunday 15 February 2026 04:48:53 +0000 (0:00:00.447) 0:00:01.048 ******* 2026-02-15 04:48:58.574843 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-02-15 04:48:58.574854 | orchestrator | 2026-02-15 04:48:58.574865 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-02-15 04:48:58.574876 | orchestrator | Sunday 15 February 2026 04:48:54 +0000 (0:00:00.598) 0:00:01.647 ******* 2026-02-15 04:48:58.574890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:48:58.574908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:48:58.574929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:48:58.574946 | orchestrator | 2026-02-15 04:48:58.574958 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-02-15 04:48:58.574969 | orchestrator | Sunday 15 February 2026 04:48:55 +0000 (0:00:00.918) 0:00:02.566 ******* 2026-02-15 04:48:58.574991 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-02-15 04:48:58.575002 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-02-15 04:48:58.575028 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 04:48:58.575039 | orchestrator | 2026-02-15 04:48:58.575050 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-02-15 04:48:58.575097 | orchestrator | Sunday 15 February 2026 04:48:56 +0000 (0:00:00.842) 0:00:03.408 ******* 2026-02-15 04:48:58.575117 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:48:58.575136 | orchestrator | 2026-02-15 04:48:58.575155 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-02-15 04:48:58.575176 | orchestrator | Sunday 15 February 2026 04:48:56 +0000 (0:00:00.576) 0:00:03.984 ******* 2026-02-15 04:48:58.575219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:48:58.575233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:48:58.575245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:48:58.575256 | 
orchestrator | 2026-02-15 04:48:58.575267 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-02-15 04:48:58.575278 | orchestrator | Sunday 15 February 2026 04:48:58 +0000 (0:00:01.330) 0:00:05.314 ******* 2026-02-15 04:48:58.575289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-15 04:48:58.575305 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:48:58.575343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-15 04:48:58.575365 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:48:58.575398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-15 04:49:05.521859 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:49:05.521995 | orchestrator | 2026-02-15 04:49:05.522093 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-02-15 04:49:05.522111 | orchestrator | Sunday 15 February 2026 04:48:58 +0000 (0:00:00.556) 0:00:05.870 ******* 2026-02-15 04:49:05.522125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-15 04:49:05.522140 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:49:05.522152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-15 04:49:05.522164 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:49:05.522175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-02-15 04:49:05.522211 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:49:05.522223 | orchestrator | 2026-02-15 04:49:05.522234 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-02-15 04:49:05.522245 | orchestrator | Sunday 15 February 2026 04:48:59 +0000 (0:00:00.646) 0:00:06.517 ******* 2026-02-15 04:49:05.522257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:49:05.522284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:49:05.522315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:49:05.522328 | orchestrator | 2026-02-15 04:49:05.522339 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-02-15 04:49:05.522350 | orchestrator | Sunday 15 
February 2026 04:49:00 +0000 (0:00:01.312) 0:00:07.829 ******* 2026-02-15 04:49:05.522361 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:49:05.522373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:49:05.522392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:49:05.522406 | orchestrator | 2026-02-15 04:49:05.522419 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-02-15 04:49:05.522432 | orchestrator | Sunday 15 February 2026 04:49:02 +0000 (0:00:01.635) 0:00:09.464 ******* 2026-02-15 04:49:05.522445 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:49:05.522459 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:49:05.522472 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:49:05.522485 | orchestrator | 2026-02-15 04:49:05.522498 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-02-15 04:49:05.522511 | orchestrator | Sunday 15 February 2026 04:49:02 +0000 (0:00:00.330) 0:00:09.794 ******* 2026-02-15 04:49:05.522524 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-15 04:49:05.522538 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-15 04:49:05.522556 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-02-15 04:49:05.522570 | orchestrator | 2026-02-15 04:49:05.522584 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-02-15 04:49:05.522597 | orchestrator | Sunday 15 February 2026 04:49:03 +0000 (0:00:01.287) 0:00:11.082 ******* 2026-02-15 04:49:05.522610 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-15 04:49:05.522624 | orchestrator | changed: [testbed-node-1] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-15 04:49:05.522637 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-02-15 04:49:05.522650 | orchestrator | 2026-02-15 04:49:05.522664 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-02-15 04:49:05.522684 | orchestrator | Sunday 15 February 2026 04:49:05 +0000 (0:00:01.728) 0:00:12.810 ******* 2026-02-15 04:49:12.173499 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 04:49:12.173613 | orchestrator | 2026-02-15 04:49:12.173630 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-02-15 04:49:12.173644 | orchestrator | Sunday 15 February 2026 04:49:06 +0000 (0:00:00.799) 0:00:13.610 ******* 2026-02-15 04:49:12.173655 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-02-15 04:49:12.173667 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-02-15 04:49:12.173678 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:49:12.173690 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:49:12.173701 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:49:12.173712 | orchestrator | 2026-02-15 04:49:12.173725 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-02-15 04:49:12.173736 | orchestrator | Sunday 15 February 2026 04:49:07 +0000 (0:00:00.715) 0:00:14.326 ******* 2026-02-15 04:49:12.173747 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:49:12.173758 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:49:12.173769 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:49:12.173780 | orchestrator | 2026-02-15 04:49:12.173791 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-02-15 
04:49:12.173830 | orchestrator | Sunday 15 February 2026 04:49:07 +0000 (0:00:00.353) 0:00:14.679 ******* 2026-02-15 04:49:12.173845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1091466, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1247814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:12.173861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1091466, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1247814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:12.173873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1091466, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1247814, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:12.173899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1091723, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1897552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:12.173930 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1091723, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1897552, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:12.173945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1091723, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1897552, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:12.173966 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1091481, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.129634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:12.173980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1091481, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.129634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:12.173993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1091481, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 
1771123505.129634, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:12.174006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1091724, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1907823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:12.174107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1091724, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1907823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:12.174133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1091724, 'dev': 167, 'nlink': 1, 'atime': 
1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1907823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.042927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1091499, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.136555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1091499, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.136555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043143 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 
'inode': 1091499, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.136555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1091714, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1877823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1091714, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1877823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1091714, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1877823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1091463, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1238408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1091463, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1238408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
84, 'inode': 1091463, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1238408, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1091470, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1257813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1091470, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1257813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 
'gid': 0, 'size': 34113, 'inode': 1091470, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1257813, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:16.043331 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1091483, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1297815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1091483, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1297815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1091483, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1297815, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997487 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1091702, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1839297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1091702, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1839297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1091702, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1839297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1091720, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1887825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1091720, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1887825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1091720, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1887825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1091475, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1277814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1091475, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1277814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1091475, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1277814, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1091709, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1857824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:19.997615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1091709, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1857824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1091709, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1857824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329673 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1091504, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1839297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329688 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1091504, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1839297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1091504, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1839297, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1091497, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1347816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1091497, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1347816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': 
{'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1091497, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1347816, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1091493, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.134651, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329822 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1091493, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.134651, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329834 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1091493, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.134651, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1091704, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1856246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1091704, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1856246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:24.329889 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1091704, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1856246, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1091485, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1328847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1091485, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1328847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068459 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1091485, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1328847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1091718, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1887152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1091718, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1887152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2026-02-15 04:49:28.068517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1091718, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1887152, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092651, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3957858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092651, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3957858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1092651, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3957858, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1091856, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.232783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1091856, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.232783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1091856, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.232783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:28.068602 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1091745, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1947825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1091745, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1947825, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1091745, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1947825, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507410 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092553, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3637853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092553, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771123505.3637853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1091734, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1927824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1092553, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3637853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1091734, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1927824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092621, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3867857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092621, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3867857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507617 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1091734, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1927824, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092576, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3827856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092576, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3827856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:32.507661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1092621, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3867857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.327857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1092623, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3877857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.327961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1092623, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3877857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 
04:49:36.328015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1092576, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3827856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.328029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092645, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3948221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.328041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092645, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3948221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.328052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1092623, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3877857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.328126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092617, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3857856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.328141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092617, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3857856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.328195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1092645, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3948221, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.328224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1091874, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.235783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.328237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1091874, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.235783, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.328260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1092617, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3857856, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:36.328282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1091850, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.2218993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.297705 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1091850, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771123505.2218993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.297850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1091874, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.235783, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.297882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1091872, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.2347832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.297895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1091872, 'dev': 
167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.2347832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.297907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1091850, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.2218993, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.297919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1091748, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.2197828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.297949 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1091748, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.2197828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.297970 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1091872, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.2347832, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.297987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1091875, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3637853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.298000 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1091875, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3637853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.298012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1091748, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.2197828, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.298118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1092636, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3937857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:40.298142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1092636, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3937857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.306711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1091875, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3637853, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.306842 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092630, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3907857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.306869 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092630, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3907857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.306891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1092636, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3937857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.306913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1091737, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1934261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.306926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1091737, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1934261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.306977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1092630, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3907857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.306997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1091741, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1944315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.307009 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1091741, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1944315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.307020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1091737, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1934261, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.307032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092608, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 
1764530892.0, 'ctime': 1771123505.3847857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.307044 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092608, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3847857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:49:44.307070 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1091741, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.1944315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:51:20.863345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 21898, 'inode': 1092628, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3888717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:51:20.863466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1092628, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3888717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:51:20.863532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1092608, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3847857, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:51:20.863547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1092628, 'dev': 167, 'nlink': 1, 'atime': 1764530892.0, 'mtime': 1764530892.0, 'ctime': 1771123505.3888717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-02-15 04:51:20.863559 | orchestrator | 2026-02-15 04:51:20.863573 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-02-15 04:51:20.863610 | orchestrator | Sunday 15 February 2026 04:49:46 +0000 (0:00:38.965) 0:00:53.644 ******* 2026-02-15 04:51:20.863623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:51:20.863652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:51:20.863672 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.3.0.20251130', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-02-15 04:51:20.863684 | orchestrator | 2026-02-15 04:51:20.863695 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-02-15 04:51:20.863706 | orchestrator | Sunday 15 February 2026 04:49:47 +0000 (0:00:01.038) 0:00:54.683 ******* 2026-02-15 04:51:20.863717 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:51:20.863730 | orchestrator | 2026-02-15 04:51:20.863741 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-02-15 04:51:20.863752 | orchestrator | Sunday 15 February 2026 04:49:49 +0000 (0:00:02.350) 0:00:57.034 ******* 2026-02-15 04:51:20.863763 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:51:20.863774 | orchestrator | 2026-02-15 04:51:20.863785 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-15 04:51:20.863797 | orchestrator | Sunday 15 February 2026 04:49:52 +0000 (0:00:02.403) 0:00:59.437 ******* 2026-02-15 04:51:20.863810 | orchestrator | 2026-02-15 04:51:20.863822 | orchestrator | TASK [grafana : Flush handlers] 
************************************************ 2026-02-15 04:51:20.863834 | orchestrator | Sunday 15 February 2026 04:49:52 +0000 (0:00:00.071) 0:00:59.509 ******* 2026-02-15 04:51:20.863847 | orchestrator | 2026-02-15 04:51:20.863859 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-02-15 04:51:20.863872 | orchestrator | Sunday 15 February 2026 04:49:52 +0000 (0:00:00.070) 0:00:59.579 ******* 2026-02-15 04:51:20.863884 | orchestrator | 2026-02-15 04:51:20.863896 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-02-15 04:51:20.863908 | orchestrator | Sunday 15 February 2026 04:49:52 +0000 (0:00:00.072) 0:00:59.651 ******* 2026-02-15 04:51:20.863921 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:51:20.863933 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:51:20.863945 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:51:20.863965 | orchestrator | 2026-02-15 04:51:20.863978 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-02-15 04:51:20.863991 | orchestrator | Sunday 15 February 2026 04:49:54 +0000 (0:00:02.201) 0:01:01.853 ******* 2026-02-15 04:51:20.864003 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:51:20.864015 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:51:20.864027 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2026-02-15 04:51:20.864042 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-02-15 04:51:20.864054 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 2026-02-15 04:51:20.864067 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (9 retries left). 
2026-02-15 04:51:20.864079 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:51:20.864092 | orchestrator |
2026-02-15 04:51:20.864104 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-02-15 04:51:20.864117 | orchestrator | Sunday 15 February 2026 04:50:45 +0000 (0:00:50.901) 0:01:52.754 *******
2026-02-15 04:51:20.864130 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:51:20.864168 | orchestrator | changed: [testbed-node-1]
2026-02-15 04:51:20.864181 | orchestrator | changed: [testbed-node-2]
2026-02-15 04:51:20.864191 | orchestrator |
2026-02-15 04:51:20.864202 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-02-15 04:51:20.864213 | orchestrator | Sunday 15 February 2026 04:51:15 +0000 (0:00:30.073) 0:02:22.828 *******
2026-02-15 04:51:20.864224 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:51:20.864235 | orchestrator |
2026-02-15 04:51:20.864246 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-02-15 04:51:20.864262 | orchestrator | Sunday 15 February 2026 04:51:17 +0000 (0:00:02.329) 0:02:25.157 *******
2026-02-15 04:51:20.864280 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:51:20.864299 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:51:20.864327 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:51:20.864347 | orchestrator |
2026-02-15 04:51:20.864364 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-02-15 04:51:20.864380 | orchestrator | Sunday 15 February 2026 04:51:18 +0000 (0:00:00.309) 0:02:25.467 *******
2026-02-15 04:51:20.864399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-02-15 04:51:20.864431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-02-15 04:51:21.474368 | orchestrator |
2026-02-15 04:51:21.474474 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-02-15 04:51:21.474490 | orchestrator | Sunday 15 February 2026 04:51:20 +0000 (0:00:02.682) 0:02:28.150 *******
2026-02-15 04:51:21.474502 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:51:21.474514 | orchestrator |
2026-02-15 04:51:21.474525 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 04:51:21.474538 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-15 04:51:21.474569 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-15 04:51:21.474581 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-15 04:51:21.474613 | orchestrator |
2026-02-15 04:51:21.474625 | orchestrator |
2026-02-15 04:51:21.474636 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 04:51:21.474647 | orchestrator | Sunday 15 February 2026 04:51:21 +0000 (0:00:00.281) 0:02:28.431 *******
2026-02-15 04:51:21.474658 | orchestrator | ===============================================================================
2026-02-15 04:51:21.474670 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 50.90s
2026-02-15 04:51:21.474680 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 38.97s
2026-02-15 04:51:21.474691 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 30.07s
2026-02-15 04:51:21.474702 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.68s
2026-02-15 04:51:21.474713 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.40s
2026-02-15 04:51:21.474723 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.35s
2026-02-15 04:51:21.474734 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.33s
2026-02-15 04:51:21.474745 | orchestrator | grafana : Restart first grafana container ------------------------------- 2.20s
2026-02-15 04:51:21.474755 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.73s
2026-02-15 04:51:21.474766 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.64s
2026-02-15 04:51:21.474777 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.33s
2026-02-15 04:51:21.474787 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.31s
2026-02-15 04:51:21.474798 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.29s
2026-02-15 04:51:21.474809 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.04s
2026-02-15 04:51:21.474820 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.92s
2026-02-15 04:51:21.474830 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.84s
2026-02-15 04:51:21.474841 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.80s
2026-02-15 04:51:21.474852 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.72s
2026-02-15 04:51:21.474863 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.65s
2026-02-15 04:51:21.474873 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.60s
2026-02-15 04:51:21.773934 | orchestrator | + sh -c /opt/configuration/scripts/deploy/510-clusterapi.sh
2026-02-15 04:51:21.781816 | orchestrator | + set -e
2026-02-15 04:51:21.781904 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-15 04:51:21.782823 | orchestrator | ++ export INTERACTIVE=false
2026-02-15 04:51:21.782852 | orchestrator | ++ INTERACTIVE=false
2026-02-15 04:51:21.782864 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-15 04:51:21.782875 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-15 04:51:21.782886 | orchestrator | + source /opt/manager-vars.sh
2026-02-15 04:51:21.783867 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-15 04:51:21.783890 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-15 04:51:21.783901 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-15 04:51:21.783912 | orchestrator | ++ CEPH_VERSION=reef
2026-02-15 04:51:21.783924 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-15 04:51:21.783936 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-15 04:51:21.783948 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-15 04:51:21.783959 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-15 04:51:21.783970 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-15 04:51:21.783981 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-15 04:51:21.783993 | orchestrator | ++ export ARA=false
2026-02-15 04:51:21.784004 | orchestrator | ++ ARA=false
2026-02-15 04:51:21.784015 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-15 04:51:21.784026 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-15 04:51:21.784037 | orchestrator | ++ export TEMPEST=false
2026-02-15 04:51:21.784048 | orchestrator | ++ TEMPEST=false
2026-02-15 04:51:21.784058 | orchestrator | ++ export IS_ZUUL=true
2026-02-15 04:51:21.784094 | orchestrator | ++ IS_ZUUL=true
2026-02-15 04:51:21.784105 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 04:51:21.784116 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 04:51:21.784127 | orchestrator | ++ export EXTERNAL_API=false
2026-02-15 04:51:21.784138 | orchestrator | ++ EXTERNAL_API=false
2026-02-15 04:51:21.784174 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-15 04:51:21.784185 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-15 04:51:21.784196 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-15 04:51:21.784207 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-15 04:51:21.784218 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-15 04:51:21.784229 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-15 04:51:21.785040 | orchestrator | ++ semver 9.5.0 8.0.0
2026-02-15 04:51:21.855122 | orchestrator | + [[ 1 -ge 0 ]]
2026-02-15 04:51:21.855279 | orchestrator | + osism apply clusterapi
2026-02-15 04:51:24.022270 | orchestrator | 2026-02-15 04:51:24 | INFO  | Task 42b82497-71ee-4420-9a04-e92528a42304 (clusterapi) was prepared for execution.
2026-02-15 04:51:24.022358 | orchestrator | 2026-02-15 04:51:24 | INFO  | It takes a moment until task 42b82497-71ee-4420-9a04-e92528a42304 (clusterapi) has been started and output is visible here.
2026-02-15 04:52:18.522767 | orchestrator |
2026-02-15 04:52:18.522915 | orchestrator | PLAY [Apply cert_manager role] *************************************************
2026-02-15 04:52:18.522941 | orchestrator |
2026-02-15 04:52:18.522961 | orchestrator | TASK [Include cert_manager role] ***********************************************
2026-02-15 04:52:18.522980 | orchestrator | Sunday 15 February 2026 04:51:28 +0000 (0:00:00.185) 0:00:00.186 *******
2026-02-15 04:52:18.523000 | orchestrator | included: cert_manager for testbed-manager
2026-02-15 04:52:18.523019 | orchestrator |
2026-02-15 04:52:18.523037 | orchestrator | TASK [cert_manager : Deploy cert-manager crds] *********************************
2026-02-15 04:52:18.523056 | orchestrator | Sunday 15 February 2026 04:51:28 +0000 (0:00:00.234) 0:00:00.420 *******
2026-02-15 04:52:18.523074 | orchestrator | changed: [testbed-manager]
2026-02-15 04:52:18.523094 | orchestrator |
2026-02-15 04:52:18.523136 | orchestrator | TASK [cert_manager : Deploy cert-manager] **************************************
2026-02-15 04:52:18.523156 | orchestrator | Sunday 15 February 2026 04:51:34 +0000 (0:00:05.469) 0:00:05.890 *******
2026-02-15 04:52:18.523174 | orchestrator | changed: [testbed-manager]
2026-02-15 04:52:18.523289 | orchestrator |
2026-02-15 04:52:18.523309 | orchestrator | PLAY [Initialize or upgrade the CAPI management cluster] ***********************
2026-02-15 04:52:18.523331 | orchestrator |
2026-02-15 04:52:18.523351 | orchestrator | TASK [Get capi-system namespace phase] *****************************************
2026-02-15 04:52:18.523369 | orchestrator | Sunday 15 February 2026 04:51:57 +0000 (0:00:23.624) 0:00:29.514 *******
2026-02-15 04:52:18.523387 | orchestrator | ok: [testbed-manager]
2026-02-15 04:52:18.523406 | orchestrator |
2026-02-15 04:52:18.523424 | orchestrator | TASK [Set capi-system-phase fact] **********************************************
2026-02-15 04:52:18.523442 | orchestrator | Sunday 15 February 2026 04:51:58 +0000 (0:00:01.109) 0:00:30.623 *******
2026-02-15 04:52:18.523459 | orchestrator | ok: [testbed-manager]
2026-02-15 04:52:18.523478 | orchestrator |
2026-02-15 04:52:18.523496 | orchestrator | TASK [Initialize the CAPI management cluster] **********************************
2026-02-15 04:52:18.523513 | orchestrator | Sunday 15 February 2026 04:51:58 +0000 (0:00:00.144) 0:00:30.768 *******
2026-02-15 04:52:18.523532 | orchestrator | ok: [testbed-manager]
2026-02-15 04:52:18.523551 | orchestrator |
2026-02-15 04:52:18.523569 | orchestrator | TASK [Upgrade the CAPI management cluster] *************************************
2026-02-15 04:52:18.523589 | orchestrator | Sunday 15 February 2026 04:52:15 +0000 (0:00:16.596) 0:00:47.364 *******
2026-02-15 04:52:18.523608 | orchestrator | skipping: [testbed-manager]
2026-02-15 04:52:18.523627 | orchestrator |
2026-02-15 04:52:18.523642 | orchestrator | TASK [Install openstack-resource-controller] ***********************************
2026-02-15 04:52:18.523653 | orchestrator | Sunday 15 February 2026 04:52:15 +0000 (0:00:00.157) 0:00:47.521 *******
2026-02-15 04:52:18.523664 | orchestrator | changed: [testbed-manager]
2026-02-15 04:52:18.523676 | orchestrator |
2026-02-15 04:52:18.523687 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 04:52:18.523730 | orchestrator | testbed-manager : ok=7  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-15 04:52:18.523743 | orchestrator |
2026-02-15 04:52:18.523754 | orchestrator |
2026-02-15 04:52:18.523765 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 04:52:18.523777 | orchestrator | Sunday 15 February 2026 04:52:18 +0000 (0:00:02.474) 0:00:49.996 *******
2026-02-15 04:52:18.523788 | orchestrator | ===============================================================================
2026-02-15 04:52:18.523798 | orchestrator | cert_manager : Deploy cert-manager ------------------------------------- 23.62s
2026-02-15 04:52:18.523810 | orchestrator | Initialize the CAPI management cluster --------------------------------- 16.60s
2026-02-15 04:52:18.523820 | orchestrator | cert_manager : Deploy cert-manager crds --------------------------------- 5.47s
2026-02-15 04:52:18.523831 | orchestrator | Install openstack-resource-controller ----------------------------------- 2.48s
2026-02-15 04:52:18.523844 | orchestrator | Get capi-system namespace phase ----------------------------------------- 1.11s
2026-02-15 04:52:18.523862 | orchestrator | Include cert_manager role ----------------------------------------------- 0.23s
2026-02-15 04:52:18.523877 | orchestrator | Upgrade the CAPI management cluster ------------------------------------- 0.16s
2026-02-15 04:52:18.523888 | orchestrator | Set capi-system-phase fact ---------------------------------------------- 0.14s
2026-02-15 04:52:18.855401 | orchestrator | + osism apply magnum
2026-02-15 04:52:20.963693 | orchestrator | 2026-02-15 04:52:20 | INFO  | Task 85982e74-87e5-445c-85e6-f0548d67488d (magnum) was prepared for execution.
2026-02-15 04:52:20.963799 | orchestrator | 2026-02-15 04:52:20 | INFO  | It takes a moment until task 85982e74-87e5-445c-85e6-f0548d67488d (magnum) has been started and output is visible here.
2026-02-15 04:53:04.717140 | orchestrator |
2026-02-15 04:53:04.717251 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-02-15 04:53:04.717263 | orchestrator |
2026-02-15 04:53:04.717271 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-02-15 04:53:04.717279 | orchestrator | Sunday 15 February 2026 04:52:25 +0000 (0:00:00.266) 0:00:00.266 *******
2026-02-15 04:53:04.717287 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:53:04.717295 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:53:04.717302 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:53:04.717308 | orchestrator |
2026-02-15 04:53:04.717315 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-02-15 04:53:04.717322 | orchestrator | Sunday 15 February 2026 04:52:25 +0000 (0:00:00.337) 0:00:00.603 *******
2026-02-15 04:53:04.717329 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2026-02-15 04:53:04.717336 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2026-02-15 04:53:04.717343 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2026-02-15 04:53:04.717349 | orchestrator |
2026-02-15 04:53:04.717356 | orchestrator | PLAY [Apply role magnum] *******************************************************
2026-02-15 04:53:04.717363 | orchestrator |
2026-02-15 04:53:04.717370 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-02-15 04:53:04.717377 | orchestrator | Sunday 15 February 2026 04:52:26 +0000 (0:00:00.446) 0:00:01.050 *******
2026-02-15 04:53:04.717383 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 04:53:04.717391 | orchestrator |
2026-02-15 04:53:04.717397 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2026-02-15 04:53:04.717404 | orchestrator | Sunday 15 February 2026 04:52:26 +0000 (0:00:00.580) 0:00:01.631 *******
2026-02-15 04:53:04.717411 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2026-02-15 04:53:04.717418 | orchestrator |
2026-02-15 04:53:04.717425 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2026-02-15 04:53:04.717444 | orchestrator | Sunday 15 February 2026 04:52:30 +0000 (0:00:03.684) 0:00:05.315 *******
2026-02-15 04:53:04.717470 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2026-02-15 04:53:04.717478 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2026-02-15 04:53:04.717485 | orchestrator |
2026-02-15 04:53:04.717492 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2026-02-15 04:53:04.717553 | orchestrator | Sunday 15 February 2026 04:52:37 +0000 (0:00:06.825) 0:00:12.140 *******
2026-02-15 04:53:04.717560 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-02-15 04:53:04.717567 | orchestrator |
2026-02-15 04:53:04.717574 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2026-02-15 04:53:04.717581 | orchestrator | Sunday 15 February 2026 04:52:40 +0000 (0:00:03.696) 0:00:15.837 *******
2026-02-15 04:53:04.717588 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-02-15 04:53:04.717595 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2026-02-15 04:53:04.717601 | orchestrator |
2026-02-15 04:53:04.717608 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2026-02-15 04:53:04.717615 | orchestrator | Sunday 15 February 2026 04:52:44 +0000 (0:00:03.924) 0:00:19.762 *******
2026-02-15 04:53:04.717622 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-02-15 04:53:04.717628 | orchestrator |
2026-02-15 04:53:04.717635 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2026-02-15 04:53:04.717642 | orchestrator | Sunday 15 February 2026 04:52:48 +0000 (0:00:03.299) 0:00:23.061 *******
2026-02-15 04:53:04.717649 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2026-02-15 04:53:04.717655 | orchestrator |
2026-02-15 04:53:04.717662 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2026-02-15 04:53:04.717669 | orchestrator | Sunday 15 February 2026 04:52:52 +0000 (0:00:03.528) 0:00:27.025 *******
2026-02-15 04:53:04.717676 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:53:04.717682 | orchestrator |
2026-02-15 04:53:04.717689 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2026-02-15 04:53:04.717696 | orchestrator | Sunday 15 February 2026 04:52:55 +0000 (0:00:03.528) 0:00:30.554 *******
2026-02-15 04:53:04.717704 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:53:04.717713 | orchestrator |
2026-02-15 04:53:04.717720 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ******************************
2026-02-15 04:53:04.717728 | orchestrator | Sunday 15 February 2026 04:52:59 +0000 (0:00:03.980) 0:00:34.534 *******
2026-02-15 04:53:04.717736 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:53:04.717744 | orchestrator |
2026-02-15 04:53:04.717752 | orchestrator | TASK [magnum : Ensuring config directories exist] ******************************
2026-02-15 04:53:04.717760 | orchestrator | Sunday 15 February 2026 04:53:03 +0000 (0:00:03.528) 0:00:38.063 *******
2026-02-15 04:53:04.717787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:04.717799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:04.717818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:04.717828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:04.717837 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:04.717850 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:12.404868 | orchestrator | 2026-02-15 04:53:12.404997 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-02-15 04:53:12.405016 | orchestrator | Sunday 15 February 2026 04:53:04 +0000 (0:00:01.627) 0:00:39.690 ******* 2026-02-15 04:53:12.405028 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:53:12.405040 | orchestrator | 2026-02-15 04:53:12.405051 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-02-15 04:53:12.405063 | orchestrator | Sunday 15 February 2026 04:53:04 +0000 (0:00:00.141) 0:00:39.832 ******* 2026-02-15 04:53:12.405074 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:53:12.405085 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:53:12.405095 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:53:12.405106 | orchestrator | 2026-02-15 04:53:12.405117 | orchestrator | 
TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-02-15 04:53:12.405128 | orchestrator | Sunday 15 February 2026 04:53:05 +0000 (0:00:00.310) 0:00:40.142 ******* 2026-02-15 04:53:12.405139 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-02-15 04:53:12.405150 | orchestrator | 2026-02-15 04:53:12.405161 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-02-15 04:53:12.405172 | orchestrator | Sunday 15 February 2026 04:53:05 +0000 (0:00:00.832) 0:00:40.974 ******* 2026-02-15 04:53:12.405201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:12.405274 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:12.405287 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:12.405328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:12.405342 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:12.405360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:12.405372 | orchestrator | 2026-02-15 04:53:12.405383 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-02-15 04:53:12.405394 
| orchestrator | Sunday 15 February 2026 04:53:08 +0000 (0:00:02.403) 0:00:43.377 ******* 2026-02-15 04:53:12.405405 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:53:12.405419 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:53:12.405431 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:53:12.405444 | orchestrator | 2026-02-15 04:53:12.405458 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-15 04:53:12.405471 | orchestrator | Sunday 15 February 2026 04:53:09 +0000 (0:00:00.626) 0:00:44.004 ******* 2026-02-15 04:53:12.405484 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 04:53:12.405497 | orchestrator | 2026-02-15 04:53:12.405510 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-02-15 04:53:12.405523 | orchestrator | Sunday 15 February 2026 04:53:09 +0000 (0:00:00.633) 0:00:44.637 ******* 2026-02-15 04:53:12.405537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:12.405565 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:13.427454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:13.427580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:13.427597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:13.427609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:13.427642 | orchestrator | 2026-02-15 04:53:13.427657 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-02-15 04:53:13.427670 | orchestrator | Sunday 15 February 2026 04:53:12 +0000 (0:00:02.746) 0:00:47.383 ******* 2026-02-15 04:53:13.427699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 04:53:13.427713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:53:13.427725 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:53:13.427744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 04:53:13.427757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:53:13.427777 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:53:13.427789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 04:53:13.427808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:53:17.055869 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:53:17.055976 | orchestrator | 2026-02-15 
04:53:17.055991 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-02-15 04:53:17.056005 | orchestrator | Sunday 15 February 2026 04:53:13 +0000 (0:00:01.014) 0:00:48.398 ******* 2026-02-15 04:53:17.056037 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 04:53:17.056053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:53:17.056066 | 
orchestrator | skipping: [testbed-node-0] 2026-02-15 04:53:17.056078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 04:53:17.056116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:53:17.056128 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:53:17.056159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 04:53:17.056177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:53:17.056189 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:53:17.056200 | orchestrator | 2026-02-15 04:53:17.056212 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-02-15 04:53:17.056278 | orchestrator | Sunday 15 February 2026 04:53:14 +0000 (0:00:00.929) 0:00:49.327 ******* 2026-02-15 04:53:17.056291 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:17.056312 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:17.056332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:23.418309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:23.418389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:23.418412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:23.418418 | orchestrator | 2026-02-15 04:53:23.418425 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-02-15 04:53:23.418432 | orchestrator | Sunday 15 February 2026 04:53:17 +0000 (0:00:02.707) 0:00:52.034 ******* 2026-02-15 04:53:23.418438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:23.418458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:23.418467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:23.418472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:23.418483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:23.418488 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:23.418494 | orchestrator | 2026-02-15 04:53:23.418499 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-02-15 04:53:23.418504 | orchestrator | Sunday 15 February 2026 04:53:22 +0000 (0:00:05.554) 0:00:57.589 ******* 2026-02-15 04:53:23.418514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 04:53:25.364863 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:53:25.364990 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:53:25.365009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 04:53:25.365023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:53:25.365035 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:53:25.365047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-02-15 04:53:25.365076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 04:53:25.365088 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:53:25.365100 | orchestrator | 2026-02-15 04:53:25.365113 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-02-15 04:53:25.365126 | orchestrator | Sunday 15 February 2026 04:53:23 +0000 (0:00:00.814) 0:00:58.404 ******* 2026-02-15 04:53:25.365145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:25.365166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 
'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:25.365179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-02-15 04:53:25.365190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:53:25.365210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-02-15 04:54:25.198716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}}) 2026-02-15 04:54:25.198835 | orchestrator | 2026-02-15 04:54:25.198855 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-02-15 04:54:25.198871 | orchestrator | Sunday 15 February 2026 04:53:25 +0000 (0:00:01.940) 0:01:00.344 ******* 2026-02-15 04:54:25.198885 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:54:25.198900 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:54:25.198914 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:54:25.198927 | orchestrator | 2026-02-15 04:54:25.198941 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-02-15 04:54:25.198953 | orchestrator | Sunday 15 February 2026 04:53:25 +0000 (0:00:00.545) 0:01:00.889 ******* 2026-02-15 04:54:25.198967 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:54:25.198981 | orchestrator | 2026-02-15 04:54:25.198995 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-02-15 04:54:25.199008 | orchestrator | Sunday 15 February 2026 04:53:28 +0000 (0:00:02.401) 0:01:03.291 ******* 2026-02-15 04:54:25.199022 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:54:25.199036 | orchestrator | 2026-02-15 04:54:25.199050 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-02-15 04:54:25.199063 | orchestrator | Sunday 15 February 2026 04:53:30 +0000 (0:00:02.488) 0:01:05.780 ******* 2026-02-15 04:54:25.199077 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:54:25.199091 | orchestrator | 2026-02-15 04:54:25.199105 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-15 04:54:25.199118 | orchestrator | Sunday 15 February 2026 04:53:47 +0000 (0:00:16.859) 0:01:22.640 ******* 2026-02-15 04:54:25.199133 | orchestrator | 2026-02-15 04:54:25.199146 | orchestrator | TASK [magnum : Flush handlers] 
************************************************* 2026-02-15 04:54:25.199159 | orchestrator | Sunday 15 February 2026 04:53:47 +0000 (0:00:00.072) 0:01:22.712 ******* 2026-02-15 04:54:25.199171 | orchestrator | 2026-02-15 04:54:25.199186 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-02-15 04:54:25.199199 | orchestrator | Sunday 15 February 2026 04:53:47 +0000 (0:00:00.073) 0:01:22.785 ******* 2026-02-15 04:54:25.199212 | orchestrator | 2026-02-15 04:54:25.199226 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2026-02-15 04:54:25.199240 | orchestrator | Sunday 15 February 2026 04:53:47 +0000 (0:00:00.076) 0:01:22.862 ******* 2026-02-15 04:54:25.199283 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:54:25.199301 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:54:25.199315 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:54:25.199329 | orchestrator | 2026-02-15 04:54:25.199344 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-02-15 04:54:25.199359 | orchestrator | Sunday 15 February 2026 04:54:08 +0000 (0:00:20.461) 0:01:43.323 ******* 2026-02-15 04:54:25.199373 | orchestrator | changed: [testbed-node-0] 2026-02-15 04:54:25.199388 | orchestrator | changed: [testbed-node-1] 2026-02-15 04:54:25.199402 | orchestrator | changed: [testbed-node-2] 2026-02-15 04:54:25.199416 | orchestrator | 2026-02-15 04:54:25.199430 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:54:25.199445 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 04:54:25.199498 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-02-15 04:54:25.199513 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  
rescued=0 ignored=0 2026-02-15 04:54:25.199526 | orchestrator | 2026-02-15 04:54:25.199539 | orchestrator | 2026-02-15 04:54:25.199553 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:54:25.199566 | orchestrator | Sunday 15 February 2026 04:54:24 +0000 (0:00:16.463) 0:01:59.787 ******* 2026-02-15 04:54:25.199580 | orchestrator | =============================================================================== 2026-02-15 04:54:25.199594 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.46s 2026-02-15 04:54:25.199609 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 16.86s 2026-02-15 04:54:25.199623 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.46s 2026-02-15 04:54:25.199637 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.83s 2026-02-15 04:54:25.199651 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.55s 2026-02-15 04:54:25.199665 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.98s 2026-02-15 04:54:25.199679 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.96s 2026-02-15 04:54:25.199715 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.92s 2026-02-15 04:54:25.199740 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.70s 2026-02-15 04:54:25.199754 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.68s 2026-02-15 04:54:25.199767 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.53s 2026-02-15 04:54:25.199780 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.53s 2026-02-15 04:54:25.199794 | 
orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.30s 2026-02-15 04:54:25.199808 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.75s 2026-02-15 04:54:25.199821 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.71s 2026-02-15 04:54:25.199835 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.49s 2026-02-15 04:54:25.199849 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.40s 2026-02-15 04:54:25.199862 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.40s 2026-02-15 04:54:25.199876 | orchestrator | magnum : Check magnum containers ---------------------------------------- 1.94s 2026-02-15 04:54:25.199890 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.63s 2026-02-15 04:54:25.956455 | orchestrator | ok: Runtime: 1:45:26.653467 2026-02-15 04:54:26.229813 | 2026-02-15 04:54:26.230002 | TASK [Deploy in a nutshell] 2026-02-15 04:54:26.764753 | orchestrator | skipping: Conditional result was False 2026-02-15 04:54:26.788753 | 2026-02-15 04:54:26.788944 | TASK [Bootstrap services] 2026-02-15 04:54:27.432511 | orchestrator | 2026-02-15 04:54:27.432733 | orchestrator | # BOOTSTRAP 2026-02-15 04:54:27.432772 | orchestrator | 2026-02-15 04:54:27.432795 | orchestrator | + set -e 2026-02-15 04:54:27.432819 | orchestrator | + echo 2026-02-15 04:54:27.432834 | orchestrator | + echo '# BOOTSTRAP' 2026-02-15 04:54:27.432852 | orchestrator | + echo 2026-02-15 04:54:27.432901 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-02-15 04:54:27.440675 | orchestrator | + set -e 2026-02-15 04:54:27.440742 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-02-15 04:54:29.724425 | orchestrator | 2026-02-15 04:54:29 | INFO  | It takes a 
moment until task 85a4903e-2f8a-4bf4-80f0-7a386d77abfe (flavor-manager) has been started and output is visible here. 2026-02-15 04:54:37.896893 | orchestrator | 2026-02-15 04:54:32 | INFO  | Flavor SCS-1L-1 created 2026-02-15 04:54:37.897034 | orchestrator | 2026-02-15 04:54:33 | INFO  | Flavor SCS-1L-1-5 created 2026-02-15 04:54:37.897053 | orchestrator | 2026-02-15 04:54:33 | INFO  | Flavor SCS-1V-2 created 2026-02-15 04:54:37.897065 | orchestrator | 2026-02-15 04:54:33 | INFO  | Flavor SCS-1V-2-5 created 2026-02-15 04:54:37.897076 | orchestrator | 2026-02-15 04:54:33 | INFO  | Flavor SCS-1V-4 created 2026-02-15 04:54:37.897088 | orchestrator | 2026-02-15 04:54:33 | INFO  | Flavor SCS-1V-4-10 created 2026-02-15 04:54:37.897099 | orchestrator | 2026-02-15 04:54:34 | INFO  | Flavor SCS-1V-8 created 2026-02-15 04:54:37.897111 | orchestrator | 2026-02-15 04:54:34 | INFO  | Flavor SCS-1V-8-20 created 2026-02-15 04:54:37.897136 | orchestrator | 2026-02-15 04:54:34 | INFO  | Flavor SCS-2V-4 created 2026-02-15 04:54:37.897156 | orchestrator | 2026-02-15 04:54:34 | INFO  | Flavor SCS-2V-4-10 created 2026-02-15 04:54:37.897180 | orchestrator | 2026-02-15 04:54:34 | INFO  | Flavor SCS-2V-8 created 2026-02-15 04:54:37.897221 | orchestrator | 2026-02-15 04:54:34 | INFO  | Flavor SCS-2V-8-20 created 2026-02-15 04:54:37.897239 | orchestrator | 2026-02-15 04:54:35 | INFO  | Flavor SCS-2V-16 created 2026-02-15 04:54:37.897257 | orchestrator | 2026-02-15 04:54:35 | INFO  | Flavor SCS-2V-16-50 created 2026-02-15 04:54:37.897311 | orchestrator | 2026-02-15 04:54:35 | INFO  | Flavor SCS-4V-8 created 2026-02-15 04:54:37.897327 | orchestrator | 2026-02-15 04:54:35 | INFO  | Flavor SCS-4V-8-20 created 2026-02-15 04:54:37.897345 | orchestrator | 2026-02-15 04:54:35 | INFO  | Flavor SCS-4V-16 created 2026-02-15 04:54:37.897362 | orchestrator | 2026-02-15 04:54:35 | INFO  | Flavor SCS-4V-16-50 created 2026-02-15 04:54:37.897381 | orchestrator | 2026-02-15 04:54:36 | INFO  | Flavor 
SCS-4V-32 created 2026-02-15 04:54:37.897397 | orchestrator | 2026-02-15 04:54:36 | INFO  | Flavor SCS-4V-32-100 created 2026-02-15 04:54:37.897412 | orchestrator | 2026-02-15 04:54:36 | INFO  | Flavor SCS-8V-16 created 2026-02-15 04:54:37.897430 | orchestrator | 2026-02-15 04:54:36 | INFO  | Flavor SCS-8V-16-50 created 2026-02-15 04:54:37.897449 | orchestrator | 2026-02-15 04:54:36 | INFO  | Flavor SCS-8V-32 created 2026-02-15 04:54:37.897466 | orchestrator | 2026-02-15 04:54:36 | INFO  | Flavor SCS-8V-32-100 created 2026-02-15 04:54:37.897486 | orchestrator | 2026-02-15 04:54:37 | INFO  | Flavor SCS-16V-32 created 2026-02-15 04:54:37.897505 | orchestrator | 2026-02-15 04:54:37 | INFO  | Flavor SCS-16V-32-100 created 2026-02-15 04:54:37.897525 | orchestrator | 2026-02-15 04:54:37 | INFO  | Flavor SCS-2V-4-20s created 2026-02-15 04:54:37.897536 | orchestrator | 2026-02-15 04:54:37 | INFO  | Flavor SCS-4V-8-50s created 2026-02-15 04:54:37.897548 | orchestrator | 2026-02-15 04:54:37 | INFO  | Flavor SCS-8V-32-100s created 2026-02-15 04:54:40.243072 | orchestrator | 2026-02-15 04:54:40 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-02-15 04:54:50.382417 | orchestrator | 2026-02-15 04:54:50 | INFO  | Task a0e28999-6dbf-48c2-9c7f-f0955572795b (bootstrap-basic) was prepared for execution. 2026-02-15 04:54:50.382532 | orchestrator | 2026-02-15 04:54:50 | INFO  | It takes a moment until task a0e28999-6dbf-48c2-9c7f-f0955572795b (bootstrap-basic) has been started and output is visible here. 
2026-02-15 04:55:35.334859 | orchestrator | 2026-02-15 04:55:35.334975 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-02-15 04:55:35.334992 | orchestrator | 2026-02-15 04:55:35.335004 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-15 04:55:35.335016 | orchestrator | Sunday 15 February 2026 04:54:54 +0000 (0:00:00.095) 0:00:00.095 ******* 2026-02-15 04:55:35.335027 | orchestrator | ok: [localhost] 2026-02-15 04:55:35.335038 | orchestrator | 2026-02-15 04:55:35.335049 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-02-15 04:55:35.335060 | orchestrator | Sunday 15 February 2026 04:54:56 +0000 (0:00:02.001) 0:00:02.096 ******* 2026-02-15 04:55:35.335070 | orchestrator | ok: [localhost] 2026-02-15 04:55:35.335081 | orchestrator | 2026-02-15 04:55:35.335092 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-02-15 04:55:35.335103 | orchestrator | Sunday 15 February 2026 04:55:04 +0000 (0:00:07.142) 0:00:09.238 ******* 2026-02-15 04:55:35.335113 | orchestrator | changed: [localhost] 2026-02-15 04:55:35.335125 | orchestrator | 2026-02-15 04:55:35.335135 | orchestrator | TASK [Create public network] *************************************************** 2026-02-15 04:55:35.335146 | orchestrator | Sunday 15 February 2026 04:55:10 +0000 (0:00:06.709) 0:00:15.948 ******* 2026-02-15 04:55:35.335157 | orchestrator | changed: [localhost] 2026-02-15 04:55:35.335168 | orchestrator | 2026-02-15 04:55:35.335179 | orchestrator | TASK [Set public network to default] ******************************************* 2026-02-15 04:55:35.335189 | orchestrator | Sunday 15 February 2026 04:55:16 +0000 (0:00:05.626) 0:00:21.575 ******* 2026-02-15 04:55:35.335205 | orchestrator | changed: [localhost] 2026-02-15 04:55:35.335216 | orchestrator | 2026-02-15 04:55:35.335227 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-02-15 04:55:35.335237 | orchestrator | Sunday 15 February 2026 04:55:22 +0000 (0:00:06.375) 0:00:27.951 ******* 2026-02-15 04:55:35.335248 | orchestrator | changed: [localhost] 2026-02-15 04:55:35.335259 | orchestrator | 2026-02-15 04:55:35.335270 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-02-15 04:55:35.335280 | orchestrator | Sunday 15 February 2026 04:55:27 +0000 (0:00:04.535) 0:00:32.486 ******* 2026-02-15 04:55:35.335291 | orchestrator | changed: [localhost] 2026-02-15 04:55:35.335349 | orchestrator | 2026-02-15 04:55:35.335363 | orchestrator | TASK [Create manager role] ***************************************************** 2026-02-15 04:55:35.335383 | orchestrator | Sunday 15 February 2026 04:55:31 +0000 (0:00:03.948) 0:00:36.435 ******* 2026-02-15 04:55:35.335394 | orchestrator | ok: [localhost] 2026-02-15 04:55:35.335407 | orchestrator | 2026-02-15 04:55:35.335419 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:55:35.335432 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-02-15 04:55:35.335445 | orchestrator | 2026-02-15 04:55:35.335456 | orchestrator | 2026-02-15 04:55:35.335469 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:55:35.335482 | orchestrator | Sunday 15 February 2026 04:55:35 +0000 (0:00:03.784) 0:00:40.219 ******* 2026-02-15 04:55:35.335494 | orchestrator | =============================================================================== 2026-02-15 04:55:35.335506 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.14s 2026-02-15 04:55:35.335518 | orchestrator | Create volume type LUKS ------------------------------------------------- 6.71s 2026-02-15 04:55:35.335530 | 
orchestrator | Set public network to default ------------------------------------------- 6.38s 2026-02-15 04:55:35.335543 | orchestrator | Create public network --------------------------------------------------- 5.63s 2026-02-15 04:55:35.335574 | orchestrator | Create public subnet ---------------------------------------------------- 4.54s 2026-02-15 04:55:35.335587 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.95s 2026-02-15 04:55:35.335599 | orchestrator | Create manager role ----------------------------------------------------- 3.78s 2026-02-15 04:55:35.335612 | orchestrator | Gathering Facts --------------------------------------------------------- 2.00s 2026-02-15 04:55:37.805059 | orchestrator | 2026-02-15 04:55:37 | INFO  | It takes a moment until task 2edb7203-2ba9-42be-adf0-f58192bde0ba (image-manager) has been started and output is visible here. 2026-02-15 04:56:20.059987 | orchestrator | 2026-02-15 04:55:40 | INFO  | Processing image 'Cirros 0.6.2' 2026-02-15 04:56:20.060133 | orchestrator | 2026-02-15 04:55:40 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-02-15 04:56:20.060161 | orchestrator | 2026-02-15 04:55:40 | INFO  | Importing image Cirros 0.6.2 2026-02-15 04:56:20.060175 | orchestrator | 2026-02-15 04:55:40 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-15 04:56:20.060187 | orchestrator | 2026-02-15 04:55:42 | INFO  | Waiting for image to leave queued state... 2026-02-15 04:56:20.060199 | orchestrator | 2026-02-15 04:55:44 | INFO  | Waiting for import to complete... 
2026-02-15 04:56:20.060210 | orchestrator | 2026-02-15 04:55:55 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-02-15 04:56:20.060221 | orchestrator | 2026-02-15 04:55:55 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-02-15 04:56:20.060232 | orchestrator | 2026-02-15 04:55:55 | INFO  | Setting internal_version = 0.6.2 2026-02-15 04:56:20.060244 | orchestrator | 2026-02-15 04:55:55 | INFO  | Setting image_original_user = cirros 2026-02-15 04:56:20.060255 | orchestrator | 2026-02-15 04:55:55 | INFO  | Adding tag os:cirros 2026-02-15 04:56:20.060266 | orchestrator | 2026-02-15 04:55:55 | INFO  | Setting property architecture: x86_64 2026-02-15 04:56:20.060276 | orchestrator | 2026-02-15 04:55:56 | INFO  | Setting property hw_disk_bus: scsi 2026-02-15 04:56:20.060287 | orchestrator | 2026-02-15 04:55:56 | INFO  | Setting property hw_rng_model: virtio 2026-02-15 04:56:20.060298 | orchestrator | 2026-02-15 04:55:56 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-15 04:56:20.060309 | orchestrator | 2026-02-15 04:55:56 | INFO  | Setting property hw_watchdog_action: reset 2026-02-15 04:56:20.060320 | orchestrator | 2026-02-15 04:55:57 | INFO  | Setting property hypervisor_type: qemu 2026-02-15 04:56:20.060376 | orchestrator | 2026-02-15 04:55:57 | INFO  | Setting property os_distro: cirros 2026-02-15 04:56:20.060397 | orchestrator | 2026-02-15 04:55:57 | INFO  | Setting property os_purpose: minimal 2026-02-15 04:56:20.060415 | orchestrator | 2026-02-15 04:55:57 | INFO  | Setting property replace_frequency: never 2026-02-15 04:56:20.060433 | orchestrator | 2026-02-15 04:55:58 | INFO  | Setting property uuid_validity: none 2026-02-15 04:56:20.060451 | orchestrator | 2026-02-15 04:55:58 | INFO  | Setting property provided_until: none 2026-02-15 04:56:20.060468 | orchestrator | 2026-02-15 04:55:58 | INFO  | Setting property image_description: Cirros 2026-02-15 04:56:20.060488 | orchestrator | 2026-02-15 04:55:58 | INFO  | 
Setting property image_name: Cirros 2026-02-15 04:56:20.060508 | orchestrator | 2026-02-15 04:55:59 | INFO  | Setting property internal_version: 0.6.2 2026-02-15 04:56:20.060528 | orchestrator | 2026-02-15 04:55:59 | INFO  | Setting property image_original_user: cirros 2026-02-15 04:56:20.060580 | orchestrator | 2026-02-15 04:55:59 | INFO  | Setting property os_version: 0.6.2 2026-02-15 04:56:20.060612 | orchestrator | 2026-02-15 04:56:00 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-02-15 04:56:20.060635 | orchestrator | 2026-02-15 04:56:00 | INFO  | Setting property image_build_date: 2023-05-30 2026-02-15 04:56:20.060655 | orchestrator | 2026-02-15 04:56:00 | INFO  | Checking status of 'Cirros 0.6.2' 2026-02-15 04:56:20.060675 | orchestrator | 2026-02-15 04:56:00 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-02-15 04:56:20.060693 | orchestrator | 2026-02-15 04:56:00 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-02-15 04:56:20.060712 | orchestrator | 2026-02-15 04:56:00 | INFO  | Processing image 'Cirros 0.6.3' 2026-02-15 04:56:20.060736 | orchestrator | 2026-02-15 04:56:01 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-02-15 04:56:20.060755 | orchestrator | 2026-02-15 04:56:01 | INFO  | Importing image Cirros 0.6.3 2026-02-15 04:56:20.060775 | orchestrator | 2026-02-15 04:56:01 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-15 04:56:20.060794 | orchestrator | 2026-02-15 04:56:01 | INFO  | Waiting for image to leave queued state... 2026-02-15 04:56:20.060812 | orchestrator | 2026-02-15 04:56:03 | INFO  | Waiting for import to complete... 
2026-02-15 04:56:20.060855 | orchestrator | 2026-02-15 04:56:13 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-02-15 04:56:20.060876 | orchestrator | 2026-02-15 04:56:14 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-02-15 04:56:20.060895 | orchestrator | 2026-02-15 04:56:14 | INFO  | Setting internal_version = 0.6.3 2026-02-15 04:56:20.060913 | orchestrator | 2026-02-15 04:56:14 | INFO  | Setting image_original_user = cirros 2026-02-15 04:56:20.060932 | orchestrator | 2026-02-15 04:56:14 | INFO  | Adding tag os:cirros 2026-02-15 04:56:20.060951 | orchestrator | 2026-02-15 04:56:14 | INFO  | Setting property architecture: x86_64 2026-02-15 04:56:20.060970 | orchestrator | 2026-02-15 04:56:14 | INFO  | Setting property hw_disk_bus: scsi 2026-02-15 04:56:20.060989 | orchestrator | 2026-02-15 04:56:14 | INFO  | Setting property hw_rng_model: virtio 2026-02-15 04:56:20.061008 | orchestrator | 2026-02-15 04:56:15 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-15 04:56:20.061026 | orchestrator | 2026-02-15 04:56:15 | INFO  | Setting property hw_watchdog_action: reset 2026-02-15 04:56:20.061046 | orchestrator | 2026-02-15 04:56:15 | INFO  | Setting property hypervisor_type: qemu 2026-02-15 04:56:20.061064 | orchestrator | 2026-02-15 04:56:15 | INFO  | Setting property os_distro: cirros 2026-02-15 04:56:20.061082 | orchestrator | 2026-02-15 04:56:16 | INFO  | Setting property os_purpose: minimal 2026-02-15 04:56:20.061093 | orchestrator | 2026-02-15 04:56:16 | INFO  | Setting property replace_frequency: never 2026-02-15 04:56:20.061104 | orchestrator | 2026-02-15 04:56:16 | INFO  | Setting property uuid_validity: none 2026-02-15 04:56:20.061115 | orchestrator | 2026-02-15 04:56:16 | INFO  | Setting property provided_until: none 2026-02-15 04:56:20.061125 | orchestrator | 2026-02-15 04:56:17 | INFO  | Setting property image_description: Cirros 2026-02-15 04:56:20.061136 | orchestrator | 2026-02-15 04:56:17 | INFO  | 
Setting property image_name: Cirros 2026-02-15 04:56:20.061147 | orchestrator | 2026-02-15 04:56:17 | INFO  | Setting property internal_version: 0.6.3 2026-02-15 04:56:20.061170 | orchestrator | 2026-02-15 04:56:18 | INFO  | Setting property image_original_user: cirros 2026-02-15 04:56:20.061181 | orchestrator | 2026-02-15 04:56:18 | INFO  | Setting property os_version: 0.6.3 2026-02-15 04:56:20.061191 | orchestrator | 2026-02-15 04:56:18 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-02-15 04:56:20.061202 | orchestrator | 2026-02-15 04:56:18 | INFO  | Setting property image_build_date: 2024-09-26 2026-02-15 04:56:20.061213 | orchestrator | 2026-02-15 04:56:19 | INFO  | Checking status of 'Cirros 0.6.3' 2026-02-15 04:56:20.061229 | orchestrator | 2026-02-15 04:56:19 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-02-15 04:56:20.061247 | orchestrator | 2026-02-15 04:56:19 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-02-15 04:56:20.360979 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-02-15 04:56:23.731622 | orchestrator | 2026-02-15 04:56:23 | INFO  | date: 2026-02-15 2026-02-15 04:56:23.731745 | orchestrator | 2026-02-15 04:56:23 | INFO  | image: octavia-amphora-haproxy-2024.2.20260215.qcow2 2026-02-15 04:56:23.731806 | orchestrator | 2026-02-15 04:56:23 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260215.qcow2 2026-02-15 04:56:23.731824 | orchestrator | 2026-02-15 04:56:23 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260215.qcow2.CHECKSUM 2026-02-15 04:56:24.046897 | orchestrator | 2026-02-15 04:56:24 | INFO  | checksum: b13ce3e6cd45ed4e908163ea4e08ebf6896679095b34eca4c04cdd2e0e5be8bd 2026-02-15 04:56:24.132858 | orchestrator | 
2026-02-15 04:56:24 | INFO  | It takes a moment until task ce591268-a607-492a-8a6c-f0ff7c8f6494 (image-manager) has been started and output is visible here. 2026-02-15 04:57:57.459559 | orchestrator | 2026-02-15 04:56:26 | INFO  | Processing image 'OpenStack Octavia Amphora 2026-02-15' 2026-02-15 04:57:57.459690 | orchestrator | 2026-02-15 04:56:26 | INFO  | Tested URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260215.qcow2: 200 2026-02-15 04:57:57.459708 | orchestrator | 2026-02-15 04:56:26 | INFO  | Importing image OpenStack Octavia Amphora 2026-02-15 2026-02-15 04:57:57.460509 | orchestrator | 2026-02-15 04:56:26 | INFO  | Importing from URL https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260215.qcow2 2026-02-15 04:57:57.460534 | orchestrator | 2026-02-15 04:56:28 | INFO  | Waiting for image to leave queued state... 2026-02-15 04:57:57.460547 | orchestrator | 2026-02-15 04:56:30 | INFO  | Waiting for import to complete... 2026-02-15 04:57:57.460560 | orchestrator | 2026-02-15 04:56:40 | INFO  | Waiting for import to complete... 2026-02-15 04:57:57.460571 | orchestrator | 2026-02-15 04:56:50 | INFO  | Waiting for import to complete... 2026-02-15 04:57:57.460582 | orchestrator | 2026-02-15 04:57:00 | INFO  | Waiting for import to complete... 2026-02-15 04:57:57.460596 | orchestrator | 2026-02-15 04:57:10 | INFO  | Waiting for import to complete... 2026-02-15 04:57:57.460607 | orchestrator | 2026-02-15 04:57:20 | INFO  | Waiting for import to complete... 2026-02-15 04:57:57.460618 | orchestrator | 2026-02-15 04:57:30 | INFO  | Waiting for import to complete... 2026-02-15 04:57:57.460629 | orchestrator | 2026-02-15 04:57:40 | INFO  | Waiting for import to complete... 
2026-02-15 04:57:57.460640 | orchestrator | 2026-02-15 04:57:51 | INFO  | Import of 'OpenStack Octavia Amphora 2026-02-15' successfully completed, reloading images 2026-02-15 04:57:57.460676 | orchestrator | 2026-02-15 04:57:51 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2026-02-15' 2026-02-15 04:57:57.460688 | orchestrator | 2026-02-15 04:57:51 | INFO  | Setting internal_version = 2026-02-15 2026-02-15 04:57:57.460698 | orchestrator | 2026-02-15 04:57:51 | INFO  | Setting image_original_user = ubuntu 2026-02-15 04:57:57.460710 | orchestrator | 2026-02-15 04:57:51 | INFO  | Adding tag amphora 2026-02-15 04:57:57.460721 | orchestrator | 2026-02-15 04:57:51 | INFO  | Adding tag os:ubuntu 2026-02-15 04:57:57.460731 | orchestrator | 2026-02-15 04:57:52 | INFO  | Setting property architecture: x86_64 2026-02-15 04:57:57.460742 | orchestrator | 2026-02-15 04:57:52 | INFO  | Setting property hw_disk_bus: scsi 2026-02-15 04:57:57.460752 | orchestrator | 2026-02-15 04:57:52 | INFO  | Setting property hw_rng_model: virtio 2026-02-15 04:57:57.460763 | orchestrator | 2026-02-15 04:57:52 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-02-15 04:57:57.460774 | orchestrator | 2026-02-15 04:57:53 | INFO  | Setting property hw_watchdog_action: reset 2026-02-15 04:57:57.460784 | orchestrator | 2026-02-15 04:57:53 | INFO  | Setting property hypervisor_type: qemu 2026-02-15 04:57:57.460795 | orchestrator | 2026-02-15 04:57:53 | INFO  | Setting property os_distro: ubuntu 2026-02-15 04:57:57.460805 | orchestrator | 2026-02-15 04:57:53 | INFO  | Setting property replace_frequency: quarterly 2026-02-15 04:57:57.460816 | orchestrator | 2026-02-15 04:57:54 | INFO  | Setting property uuid_validity: last-1 2026-02-15 04:57:57.460826 | orchestrator | 2026-02-15 04:57:54 | INFO  | Setting property provided_until: none 2026-02-15 04:57:57.460853 | orchestrator | 2026-02-15 04:57:54 | INFO  | Setting property os_purpose: network 2026-02-15 04:57:57.460864 | orchestrator 
| 2026-02-15 04:57:55 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2026-02-15 04:57:57.460875 | orchestrator | 2026-02-15 04:57:55 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2026-02-15 04:57:57.460886 | orchestrator | 2026-02-15 04:57:55 | INFO  | Setting property internal_version: 2026-02-15 2026-02-15 04:57:57.460897 | orchestrator | 2026-02-15 04:57:55 | INFO  | Setting property image_original_user: ubuntu 2026-02-15 04:57:57.460907 | orchestrator | 2026-02-15 04:57:56 | INFO  | Setting property os_version: 2026-02-15 2026-02-15 04:57:57.460918 | orchestrator | 2026-02-15 04:57:56 | INFO  | Setting property image_source: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260215.qcow2 2026-02-15 04:57:57.460929 | orchestrator | 2026-02-15 04:57:56 | INFO  | Setting property image_build_date: 2026-02-15 2026-02-15 04:57:57.460957 | orchestrator | 2026-02-15 04:57:56 | INFO  | Checking status of 'OpenStack Octavia Amphora 2026-02-15' 2026-02-15 04:57:57.460969 | orchestrator | 2026-02-15 04:57:56 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2026-02-15' 2026-02-15 04:57:57.460980 | orchestrator | 2026-02-15 04:57:57 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2026-02-15 04:57:57.460991 | orchestrator | 2026-02-15 04:57:57 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2026-02-15 04:57:57.461003 | orchestrator | 2026-02-15 04:57:57 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2026-02-15 04:57:57.461014 | orchestrator | 2026-02-15 04:57:57 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2026-02-15 04:57:57.981558 | orchestrator | ok: Runtime: 0:03:30.686690 2026-02-15 04:57:58.000104 | 2026-02-15 04:57:58.000246 | TASK [Run checks] 2026-02-15 04:57:58.762967 | orchestrator | + set -e 2026-02-15 04:57:58.763160 | orchestrator | + source 
/opt/configuration/scripts/include.sh 2026-02-15 04:57:58.763183 | orchestrator | ++ export INTERACTIVE=false 2026-02-15 04:57:58.763205 | orchestrator | ++ INTERACTIVE=false 2026-02-15 04:57:58.763219 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-15 04:57:58.763232 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-15 04:57:58.763245 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-15 04:57:58.764084 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-15 04:57:58.770680 | orchestrator | 2026-02-15 04:57:58.770764 | orchestrator | # CHECK 2026-02-15 04:57:58.770790 | orchestrator | 2026-02-15 04:57:58.770812 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-15 04:57:58.770839 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-15 04:57:58.770853 | orchestrator | + echo 2026-02-15 04:57:58.770868 | orchestrator | + echo '# CHECK' 2026-02-15 04:57:58.770886 | orchestrator | + echo 2026-02-15 04:57:58.770911 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-15 04:57:58.771459 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-15 04:57:58.843750 | orchestrator | 2026-02-15 04:57:58.843885 | orchestrator | ## Containers @ testbed-manager 2026-02-15 04:57:58.843912 | orchestrator | 2026-02-15 04:57:58.843933 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-15 04:57:58.843953 | orchestrator | + echo 2026-02-15 04:57:58.843971 | orchestrator | + echo '## Containers @ testbed-manager' 2026-02-15 04:57:58.843990 | orchestrator | + echo 2026-02-15 04:57:58.844009 | orchestrator | + osism container testbed-manager ps 2026-02-15 04:58:00.811899 | orchestrator | 2026-02-15 04:58:00 | INFO  | Creating empty known_hosts file: /share/known_hosts 2026-02-15 04:58:01.187539 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-15 04:58:01.187726 | orchestrator | 5fb563b40063 
registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_blackbox_exporter 2026-02-15 04:58:01.187757 | orchestrator | 64a0484ec83b registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_alertmanager 2026-02-15 04:58:01.187778 | orchestrator | 96c410c2c059 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-15 04:58:01.187792 | orchestrator | 44a29b7fe742 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-15 04:58:01.187806 | orchestrator | 60e04e429b81 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_server 2026-02-15 04:58:01.187825 | orchestrator | b1f15c2ef3ed registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" About an hour ago Up 59 minutes cephclient 2026-02-15 04:58:01.187839 | orchestrator | ace78f3e29b6 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-15 04:58:01.187852 | orchestrator | 49ed01590d53 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 2026-02-15 04:58:01.187892 | orchestrator | e04223c479f7 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-15 04:58:01.187905 | orchestrator | c89234ed6284 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 2 hours ago Up 2 hours openstackclient 2026-02-15 04:58:01.187918 | orchestrator | 101630bfcd90 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp phpmyadmin 2026-02-15 04:58:01.187943 | 
orchestrator | 144bc11de491 registry.osism.tech/osism/homer:v25.10.1 "/bin/sh /entrypoint…" 2 hours ago Up 2 hours (healthy) 8080/tcp homer 2026-02-15 04:58:01.187956 | orchestrator | ed5744a08181 registry.osism.tech/osism/cgit:1.2.3 "httpd-foreground" 2 hours ago Up 2 hours 80/tcp cgit 2026-02-15 04:58:01.187969 | orchestrator | 410661289ec3 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:3128->3128/tcp squid 2026-02-15 04:58:01.188009 | orchestrator | aa642ed75bde registry.osism.tech/osism/inventory-reconciler:0.20251130.0 "/sbin/tini -- /entr…" 2 hours ago Up 2 hours (healthy) manager-inventory_reconciler-1 2026-02-15 04:58:01.188024 | orchestrator | 06d9890a0908 registry.osism.tech/osism/osism-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-ansible 2026-02-15 04:58:01.188037 | orchestrator | 9709de9c28a9 registry.osism.tech/osism/ceph-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) ceph-ansible 2026-02-15 04:58:01.188051 | orchestrator | b57f091c1619 registry.osism.tech/osism/kolla-ansible:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) kolla-ansible 2026-02-15 04:58:01.188516 | orchestrator | eb4cbba144e4 registry.osism.tech/osism/osism-kubernetes:0.20251130.0 "/entrypoint.sh osis…" 2 hours ago Up 2 hours (healthy) osism-kubernetes 2026-02-15 04:58:01.188542 | orchestrator | 91dac3e2f320 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" 2 hours ago Up 2 hours (healthy) 8000/tcp manager-ara-server-1 2026-02-15 04:58:01.188557 | orchestrator | e62bbdfeaf3f registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- sleep…" 2 hours ago Up 2 hours (healthy) osismclient 2026-02-15 04:58:01.188571 | orchestrator | 2cc3554b9704 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2026-02-15 
04:58:01.188599 | orchestrator | b7e973c19aba registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-beat-1 2026-02-15 04:58:01.188612 | orchestrator | a505cc5efce7 registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-listener-1 2026-02-15 04:58:01.188624 | orchestrator | 5b47a1a4650f registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 3306/tcp manager-mariadb-1 2026-02-15 04:58:01.188636 | orchestrator | 50bc61e77049 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours (healthy) 6379/tcp manager-redis-1 2026-02-15 04:58:01.188651 | orchestrator | 8821c99eb74f registry.osism.tech/osism/osism-frontend:0.20251130.1 "docker-entrypoint.s…" 2 hours ago Up 2 hours 192.168.16.5:3000->3000/tcp osism-frontend 2026-02-15 04:58:01.188664 | orchestrator | bda9415d057d registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-flower-1 2026-02-15 04:58:01.188686 | orchestrator | 39b5132e403a registry.osism.tech/osism/osism:0.20251130.1 "/sbin/tini -- osism…" 2 hours ago Up 2 hours (healthy) manager-openstack-1 2026-02-15 04:58:01.188701 | orchestrator | dab5311e234f registry.osism.tech/dockerhub/library/traefik:v3.5.0 "/entrypoint.sh trae…" 2 hours ago Up 2 hours (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2026-02-15 04:58:01.523724 | orchestrator | 2026-02-15 04:58:01.523842 | orchestrator | ## Images @ testbed-manager 2026-02-15 04:58:01.523861 | orchestrator | 2026-02-15 04:58:01.523873 | orchestrator | + echo 2026-02-15 04:58:01.523885 | orchestrator | + echo '## Images @ testbed-manager' 2026-02-15 04:58:01.523897 | orchestrator | + echo 2026-02-15 04:58:01.523912 | orchestrator | + osism container testbed-manager images 2026-02-15 04:58:03.929169 | 
orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-15 04:58:03.929274 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 7b891f839cbb 25 hours ago 239MB 2026-02-15 04:58:03.929287 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.7-alpine e08bd8d5a677 2 weeks ago 41.4MB 2026-02-15 04:58:03.929297 | orchestrator | registry.osism.tech/osism/homer v25.10.1 ea34b371c716 2 months ago 11.5MB 2026-02-15 04:58:03.929306 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20251130.0 0f140ec71e5f 2 months ago 608MB 2026-02-15 04:58:03.929318 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-15 04:58:03.929327 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-15 04:58:03.929336 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-15 04:58:03.929345 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20251130 7bbb4f6f4831 2 months ago 308MB 2026-02-15 04:58:03.929354 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-15 04:58:03.929386 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20251130 ba994ea4acda 2 months ago 404MB 2026-02-15 04:58:03.929396 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20251130 56b43d5c716a 2 months ago 839MB 2026-02-15 04:58:03.929440 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-15 04:58:03.929450 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20251130.0 1bfc1dadeee1 2 months ago 330MB 2026-02-15 04:58:03.929459 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20251130.0 42988b2d229c 2 months ago 613MB 2026-02-15 04:58:03.929468 | 
orchestrator | registry.osism.tech/osism/ceph-ansible 0.20251130.0 a212d8ca4a50 2 months ago 560MB 2026-02-15 04:58:03.929477 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20251130.0 9beff03cb77b 2 months ago 1.23GB 2026-02-15 04:58:03.929486 | orchestrator | registry.osism.tech/osism/osism 0.20251130.1 95213af683ec 2 months ago 383MB 2026-02-15 04:58:03.929494 | orchestrator | registry.osism.tech/osism/osism-frontend 0.20251130.1 2cb6e7609620 2 months ago 238MB 2026-02-15 04:58:03.929503 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.4 70745dd8f1d0 3 months ago 334MB 2026-02-15 04:58:03.929511 | orchestrator | phpmyadmin/phpmyadmin 5.2 e66b1f5a8c58 4 months ago 742MB 2026-02-15 04:58:03.929520 | orchestrator | registry.osism.tech/osism/ara-server 1.7.3 d1b687333f2f 5 months ago 275MB 2026-02-15 04:58:03.929529 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.5.0 11cc59587f6a 6 months ago 226MB 2026-02-15 04:58:03.929538 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 9 months ago 453MB 2026-02-15 04:58:03.929546 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 20 months ago 146MB 2026-02-15 04:58:03.929555 | orchestrator | registry.osism.tech/osism/cgit 1.2.3 16e7285642b1 2 years ago 545MB 2026-02-15 04:58:04.249452 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-15 04:58:04.249572 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-15 04:58:04.314912 | orchestrator | 2026-02-15 04:58:04.315042 | orchestrator | ## Containers @ testbed-node-0 2026-02-15 04:58:04.315069 | orchestrator | 2026-02-15 04:58:04.315086 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-15 04:58:04.315104 | orchestrator | + echo 2026-02-15 04:58:04.315120 | orchestrator | + echo '## Containers @ testbed-node-0' 2026-02-15 04:58:04.315138 | orchestrator | + echo 2026-02-15 04:58:04.315155 | orchestrator | + osism container 
testbed-node-0 ps 2026-02-15 04:58:06.847969 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-15 04:58:06.848096 | orchestrator | c2f8068b1ae5 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-15 04:58:06.848117 | orchestrator | 61299c8d182d registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-15 04:58:06.848155 | orchestrator | f85f41ab3344 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2026-02-15 04:58:06.848175 | orchestrator | 93426d8a5029 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-15 04:58:06.848225 | orchestrator | 2f09519e2b20 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 9 minutes prometheus_cadvisor 2026-02-15 04:58:06.848245 | orchestrator | 1b8c3d864f4f registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-15 04:58:06.848273 | orchestrator | 6de7cb0a8c64 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-15 04:58:06.848292 | orchestrator | 4fcb2870a899 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-15 04:58:06.848309 | orchestrator | eb00a8eaf84e registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-15 04:58:06.848327 | orchestrator | 1aa5f1146a18 
registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-15 04:58:06.848344 | orchestrator | d9bb31c551be registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-15 04:58:06.848362 | orchestrator | e3a3c0a3beff registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-15 04:58:06.848379 | orchestrator | e16d22ea8716 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-15 04:58:06.848397 | orchestrator | 314895f49b69 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-15 04:58:06.848469 | orchestrator | d3bad23ddecf registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-15 04:58:06.848487 | orchestrator | 30df43816580 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-15 04:58:06.848515 | orchestrator | b5cf42c40651 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-15 04:58:06.848534 | orchestrator | 72db260ffae9 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-15 04:58:06.848551 | orchestrator | e7577c71b1d7 registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-15 04:58:06.848595 | orchestrator | 96345737b8db 
registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-15 04:58:06.848614 | orchestrator | 53348e518988 registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-02-15 04:58:06.848632 | orchestrator | 3c47219709cb registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-15 04:58:06.848663 | orchestrator | 4e4ab610ecf1 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-15 04:58:06.848681 | orchestrator | 666c7be6669a registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-15 04:58:06.848699 | orchestrator | 4bb07c61fa67 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-02-15 04:58:06.848723 | orchestrator | 4e3aa03cbd50 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-02-15 04:58:06.848741 | orchestrator | 6b61da5bf850 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-15 04:58:06.848758 | orchestrator | bb4b04df9c1e registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-15 04:58:06.848776 | orchestrator | e6a2e1d74407 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 
2026-02-15 04:58:06.848810 | orchestrator | d6848a334ccf registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-15 04:58:06.848827 | orchestrator | 2a5fbc72efa6 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-15 04:58:06.848846 | orchestrator | f7be2bed6952 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-15 04:58:06.848864 | orchestrator | f339a6b09cae registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-15 04:58:06.848882 | orchestrator | 79e422d2b652 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-15 04:58:06.848901 | orchestrator | e3c6496c1308 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_scheduler 2026-02-15 04:58:06.848920 | orchestrator | 4f0cacdb12cf registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-02-15 04:58:06.848938 | orchestrator | 6182ee3e55d6 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-02-15 04:58:06.848956 | orchestrator | be6d6c9114fe registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-02-15 04:58:06.848983 | orchestrator | 4f62595e7c56 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) 
skyline_apiserver 2026-02-15 04:58:06.849030 | orchestrator | 10b7e82053a1 registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-15 04:58:06.849051 | orchestrator | 45f5e6dfb154 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-15 04:58:06.849071 | orchestrator | 7d43b13d9d36 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_conductor 2026-02-15 04:58:06.849090 | orchestrator | dfae77146259 registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-02-15 04:58:06.849102 | orchestrator | 375cff1a26bb registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-15 04:58:06.849113 | orchestrator | ae4211428592 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server 2026-02-15 04:58:06.849124 | orchestrator | 57026902eb5d registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api 2026-02-15 04:58:06.849134 | orchestrator | 7175d0099d9c registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone 2026-02-15 04:58:06.849145 | orchestrator | aee44de18893 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet 2026-02-15 04:58:06.849156 | orchestrator | 22f1280493d7 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh 2026-02-15 04:58:06.849167 | 
orchestrator | fba1b82ffa89 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-0 2026-02-15 04:58:06.849177 | orchestrator | 4f6f5235de44 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-0 2026-02-15 04:58:06.849194 | orchestrator | e40f30e87190 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-0 2026-02-15 04:58:06.849205 | orchestrator | 1c4f37cac642 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-15 04:58:06.849216 | orchestrator | 4113d7ed708f registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-15 04:58:06.849227 | orchestrator | 418c7cca78fc registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-15 04:58:06.849238 | orchestrator | 9100d85da56e registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-15 04:58:06.849248 | orchestrator | 6dea3aad0992 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-15 04:58:06.849266 | orchestrator | b7afbd56b349 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-15 04:58:06.849277 | orchestrator | fb5b7d97e14e registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-15 04:58:06.849294 | orchestrator | 6279e5cd3e7d 
registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-15 04:58:06.849306 | orchestrator | 9a1fb2c77a85 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-15 04:58:06.849317 | orchestrator | ed51edfdf4b7 registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-15 04:58:06.849327 | orchestrator | 584e91f2b3ee registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-15 04:58:06.849338 | orchestrator | a4d0285064fc registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch_dashboards 2026-02-15 04:58:06.849349 | orchestrator | 623cf9755e5a registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch 2026-02-15 04:58:06.849359 | orchestrator | 685da9ae5b93 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived 2026-02-15 04:58:06.849370 | orchestrator | 3e91cfc93f98 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql 2026-02-15 04:58:06.849381 | orchestrator | 84361d8fe4b3 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy 2026-02-15 04:58:06.849392 | orchestrator | 623a45393969 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron 2026-02-15 04:58:06.849432 | orchestrator | 577d2a8ee1df registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox 
2026-02-15 04:58:06.849452 | orchestrator | 82c032052b86 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd 2026-02-15 04:58:07.173082 | orchestrator | 2026-02-15 04:58:07.173186 | orchestrator | ## Images @ testbed-node-0 2026-02-15 04:58:07.173201 | orchestrator | 2026-02-15 04:58:07.173211 | orchestrator | + echo 2026-02-15 04:58:07.173222 | orchestrator | + echo '## Images @ testbed-node-0' 2026-02-15 04:58:07.173232 | orchestrator | + echo 2026-02-15 04:58:07.173242 | orchestrator | + osism container testbed-node-0 images 2026-02-15 04:58:09.635918 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2026-02-15 04:58:09.636048 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB 2026-02-15 04:58:09.636065 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB 2026-02-15 04:58:09.636077 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB 2026-02-15 04:58:09.636118 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB 2026-02-15 04:58:09.636146 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB 2026-02-15 04:58:09.636158 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB 2026-02-15 04:58:09.636170 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB 2026-02-15 04:58:09.636181 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB 2026-02-15 04:58:09.636192 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB 2026-02-15 04:58:09.636819 | orchestrator | registry.osism.tech/kolla/release/haproxy 
2.8.15.20251130 6d4c583df983 2 months ago 274MB 2026-02-15 04:58:09.636918 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB 2026-02-15 04:58:09.636934 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB 2026-02-15 04:58:09.636946 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB 2026-02-15 04:58:09.636957 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB 2026-02-15 04:58:09.636969 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB 2026-02-15 04:58:09.636980 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB 2026-02-15 04:58:09.636991 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB 2026-02-15 04:58:09.637023 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB 2026-02-15 04:58:09.637035 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB 2026-02-15 04:58:09.637046 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB 2026-02-15 04:58:09.637059 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB 2026-02-15 04:58:09.637079 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB 2026-02-15 04:58:09.637092 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB 2026-02-15 04:58:09.637103 | orchestrator | 
registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB 2026-02-15 04:58:09.637114 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB 2026-02-15 04:58:09.637125 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB 2026-02-15 04:58:09.637136 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB 2026-02-15 04:58:09.637146 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB 2026-02-15 04:58:09.637157 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB 2026-02-15 04:58:09.637195 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB 2026-02-15 04:58:09.637215 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB 2026-02-15 04:58:09.637233 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB 2026-02-15 04:58:09.637307 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB 2026-02-15 04:58:09.637333 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB 2026-02-15 04:58:09.637353 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB 2026-02-15 04:58:09.637374 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB 2026-02-15 04:58:09.637555 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB 2026-02-15 04:58:09.637575 | orchestrator | 
registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB 2026-02-15 04:58:09.637587 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB 2026-02-15 04:58:09.637597 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB 2026-02-15 04:58:09.637608 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB 2026-02-15 04:58:09.637619 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB 2026-02-15 04:58:09.637630 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB 2026-02-15 04:58:09.637703 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB 2026-02-15 04:58:09.637722 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB 2026-02-15 04:58:09.637742 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB 2026-02-15 04:58:09.637755 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB 2026-02-15 04:58:09.637776 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB 2026-02-15 04:58:09.637787 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB 2026-02-15 04:58:09.637798 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB 2026-02-15 04:58:09.637809 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB 2026-02-15 04:58:09.637820 | orchestrator | 
registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB 2026-02-15 04:58:09.637831 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB 2026-02-15 04:58:09.637841 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB 2026-02-15 04:58:09.637950 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB 2026-02-15 04:58:09.637974 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB 2026-02-15 04:58:09.637999 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB 2026-02-15 04:58:09.638010 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB 2026-02-15 04:58:09.638079 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB 2026-02-15 04:58:09.638091 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB 2026-02-15 04:58:09.638102 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB 2026-02-15 04:58:09.638113 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB 2026-02-15 04:58:09.638124 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB 2026-02-15 04:58:09.638135 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB 2026-02-15 04:58:09.638146 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB 2026-02-15 04:58:09.638156 | orchestrator | 
registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB 2026-02-15 04:58:09.638167 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB 2026-02-15 04:58:09.640241 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB 2026-02-15 04:58:09.640277 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB 2026-02-15 04:58:09.961353 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2026-02-15 04:58:09.961920 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-15 04:58:10.020854 | orchestrator | 2026-02-15 04:58:10.020947 | orchestrator | ## Containers @ testbed-node-1 2026-02-15 04:58:10.020966 | orchestrator | 2026-02-15 04:58:10.020978 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-15 04:58:10.020989 | orchestrator | + echo 2026-02-15 04:58:10.021001 | orchestrator | + echo '## Containers @ testbed-node-1' 2026-02-15 04:58:10.021014 | orchestrator | + echo 2026-02-15 04:58:10.021025 | orchestrator | + osism container testbed-node-1 ps 2026-02-15 04:58:12.457478 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2026-02-15 04:58:12.457614 | orchestrator | 9bb1e715a4b6 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 3 minutes ago Up 3 minutes (healthy) magnum_conductor 2026-02-15 04:58:12.457644 | orchestrator | 58ea39d7d5c9 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api 2026-02-15 04:58:12.457664 | orchestrator | ff85004fa1a8 registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 6 minutes ago Up 6 minutes grafana 2026-02-15 04:58:12.457681 | orchestrator | 66e76f83dd48 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 
minutes ago Up 9 minutes prometheus_elasticsearch_exporter 2026-02-15 04:58:12.457707 | orchestrator | 86fa9078e562 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor 2026-02-15 04:58:12.457749 | orchestrator | 2ff93b588d21 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter 2026-02-15 04:58:12.457795 | orchestrator | 83f3ea68b1a8 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter 2026-02-15 04:58:12.457816 | orchestrator | e46cd53a0321 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter 2026-02-15 04:58:12.457840 | orchestrator | 55493e34363c registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share 2026-02-15 04:58:12.457853 | orchestrator | 4f2d11de3931 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_scheduler 2026-02-15 04:58:12.457863 | orchestrator | 1f33c0abd2af registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data 2026-02-15 04:58:12.457874 | orchestrator | 235c39f9725a registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api 2026-02-15 04:58:12.457885 | orchestrator | f63ca6bf2c1f registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier 2026-02-15 04:58:12.457896 | orchestrator | 98d04a168447 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 
"dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener 2026-02-15 04:58:12.457907 | orchestrator | ad4d0da89e56 registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator 2026-02-15 04:58:12.457918 | orchestrator | 78718f7bf6f8 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api 2026-02-15 04:58:12.457928 | orchestrator | ca7ef5477a9e registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central 2026-02-15 04:58:12.457939 | orchestrator | dbcbad0b527e registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification 2026-02-15 04:58:12.457967 | orchestrator | 3fdc3b686c3f registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker 2026-02-15 04:58:12.458000 | orchestrator | 4c86a249b9a5 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_housekeeping 2026-02-15 04:58:12.458108 | orchestrator | 52d2c6c8b1ed registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager 2026-02-15 04:58:12.458136 | orchestrator | eab416ebfef6 registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent 2026-02-15 04:58:12.458155 | orchestrator | 90288115e9fa registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api 2026-02-15 04:58:12.458193 | orchestrator | f3c7bf784931 
registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 26 minutes ago Up 26 minutes (healthy) designate_worker 2026-02-15 04:58:12.458217 | orchestrator | f884d0f5d43e registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns 2026-02-15 04:58:12.458235 | orchestrator | 59cb2b733014 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer 2026-02-15 04:58:12.458255 | orchestrator | 127bf221c092 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central 2026-02-15 04:58:12.458273 | orchestrator | 4ec08ae90415 registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api 2026-02-15 04:58:12.458294 | orchestrator | 5e9f7e5b7067 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9 2026-02-15 04:58:12.458322 | orchestrator | 9cab0a48321e registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker 2026-02-15 04:58:12.458341 | orchestrator | 84f3e414dec9 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener 2026-02-15 04:58:12.458368 | orchestrator | 2787dab34f50 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api 2026-02-15 04:58:12.458390 | orchestrator | 4dc9e97562ea registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup 2026-02-15 
04:58:12.458437 | orchestrator | 5b1054eff615 registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume 2026-02-15 04:58:12.458456 | orchestrator | c58b4e1c5903 registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler 2026-02-15 04:58:12.458474 | orchestrator | a1dccd8a2ea3 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api 2026-02-15 04:58:12.458494 | orchestrator | 6c181dc0e390 registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api 2026-02-15 04:58:12.458512 | orchestrator | 482e9a90a98b registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console 2026-02-15 04:58:12.458530 | orchestrator | 2ba7e4f09c71 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver 2026-02-15 04:58:12.458564 | orchestrator | d07606aa974b registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon 2026-02-15 04:58:12.458584 | orchestrator | a1881313640b registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy 2026-02-15 04:58:12.458615 | orchestrator | a320fa7d345c registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor 2026-02-15 04:58:12.458634 | orchestrator | 4f1b9dde38ff registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api 2026-02-15 04:58:12.458652 | orchestrator | 
eb5c6ba6fc87 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler 2026-02-15 04:58:12.458670 | orchestrator | 1e5dbe9c30a0 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server 2026-02-15 04:58:12.458688 | orchestrator | fa38a54c1697 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api 2026-02-15 04:58:12.458706 | orchestrator | 2d3beebcd4ea registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 55 minutes ago Up 55 minutes (healthy) keystone 2026-02-15 04:58:12.458724 | orchestrator | a69d95c4c8bf registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet 2026-02-15 04:58:12.458742 | orchestrator | 0f158a6bda6f registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh 2026-02-15 04:58:12.458760 | orchestrator | 5324c44b366c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-1 2026-02-15 04:58:12.458778 | orchestrator | d9811d3ae4a1 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-1 2026-02-15 04:58:12.458805 | orchestrator | 3aeb4857506c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-1 2026-02-15 04:58:12.458826 | orchestrator | 0b511e99b7e7 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd 2026-02-15 04:58:12.458844 | orchestrator | c541da89b801 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 
"dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db 2026-02-15 04:58:12.458868 | orchestrator | 0070eef91db2 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db 2026-02-15 04:58:12.458891 | orchestrator | aa526632989c registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller 2026-02-15 04:58:12.458919 | orchestrator | f743f7bb1877 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd 2026-02-15 04:58:12.458937 | orchestrator | e37eb5dc0f40 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db 2026-02-15 04:58:12.458957 | orchestrator | e52200e9a565 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq 2026-02-15 04:58:12.458998 | orchestrator | 101f65112fb5 registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb 2026-02-15 04:58:12.459018 | orchestrator | ab30a22cb411 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel 2026-02-15 04:58:12.459047 | orchestrator | 948f5db6f2ac registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis 2026-02-15 04:58:12.459067 | orchestrator | c8d775efff87 registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached 2026-02-15 04:58:12.459084 | orchestrator | 808e24464c6e registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 
"dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-15 04:58:12.459102 | orchestrator | 999c78ed8350 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-02-15 04:58:12.459129 | orchestrator | a55ed6188d4a registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-02-15 04:58:12.459152 | orchestrator | cadda199c8b3 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-02-15 04:58:12.459170 | orchestrator | 90d3a4b161d6 registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-02-15 04:58:12.459189 | orchestrator | 2f7651884033 registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-15 04:58:12.459208 | orchestrator | fde325f33e5b registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-15 04:58:12.459226 | orchestrator | 586434b76f1d registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-15 04:58:12.772970 | orchestrator |
2026-02-15 04:58:12.773071 | orchestrator | ## Images @ testbed-node-1
2026-02-15 04:58:12.773096 | orchestrator |
2026-02-15 04:58:12.773109 | orchestrator | + echo
2026-02-15 04:58:12.773121 | orchestrator | + echo '## Images @ testbed-node-1'
2026-02-15 04:58:12.773133 | orchestrator | + echo
2026-02-15 04:58:12.773145 | orchestrator | + osism container testbed-node-1 images
2026-02-15 04:58:15.287969 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-15 04:58:15.288101 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-15 04:58:15.288126 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-15 04:58:15.288146 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-15 04:58:15.288167 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-15 04:58:15.288186 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-15 04:58:15.288235 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-15 04:58:15.288255 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-15 04:58:15.288273 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-15 04:58:15.288292 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-15 04:58:15.288310 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-15 04:58:15.288327 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-15 04:58:15.288346 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-15 04:58:15.288365 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-15 04:58:15.288384 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-15 04:58:15.288403 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-15 04:58:15.288455 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-15 04:58:15.288473 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-15 04:58:15.288492 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-15 04:58:15.288515 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-15 04:58:15.288545 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-15 04:58:15.288572 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-15 04:58:15.288600 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-15 04:58:15.288626 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-15 04:58:15.288678 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-15 04:58:15.288705 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-15 04:58:15.288725 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-15 04:58:15.288743 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-15 04:58:15.288769 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-15 04:58:15.288798 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-15 04:58:15.288827 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-15 04:58:15.288854 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-15 04:58:15.288903 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-15 04:58:15.288946 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-15 04:58:15.288965 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-15 04:58:15.288983 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-15 04:58:15.289002 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-15 04:58:15.289020 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-15 04:58:15.289038 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-15 04:58:15.289058 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-15 04:58:15.289076 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-15 04:58:15.289093 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-15 04:58:15.289111 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-15 04:58:15.289128 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-15 04:58:15.289147 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-15 04:58:15.289164 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-15 04:58:15.289182 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-15 04:58:15.289201 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-15 04:58:15.289220 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-15 04:58:15.289239 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-15 04:58:15.289258 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-15 04:58:15.289277 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-15 04:58:15.289296 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-15 04:58:15.289315 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-15 04:58:15.289333 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-15 04:58:15.289353 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-15 04:58:15.289372 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-15 04:58:15.289390 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-15 04:58:15.289434 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-15 04:58:15.289465 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-15 04:58:15.289483 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-15 04:58:15.289501 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-15 04:58:15.289519 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-15 04:58:15.289538 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-15 04:58:15.289569 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-15 04:58:15.289588 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-15 04:58:15.289608 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-15 04:58:15.289627 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-15 04:58:15.289645 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-15 04:58:15.289665 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-15 04:58:15.619677 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2026-02-15 04:58:15.619919 | orchestrator | ++ semver 9.5.0 5.0.0
2026-02-15 04:58:15.677779 | orchestrator |
2026-02-15 04:58:15.677879 | orchestrator | ## Containers @ testbed-node-2
2026-02-15 04:58:15.677895 | orchestrator |
2026-02-15 04:58:15.677907 | orchestrator | + [[ 1 -eq -1 ]]
2026-02-15 04:58:15.677919 | orchestrator | + echo
2026-02-15 04:58:15.677931 | orchestrator | + echo '## Containers @ testbed-node-2'
2026-02-15 04:58:15.677943 | orchestrator | + echo
2026-02-15 04:58:15.677954 | orchestrator | + osism container testbed-node-2 ps
2026-02-15 04:58:18.173134 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2026-02-15 04:58:18.173237 | orchestrator | 20057a23a9a3 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 3 minutes (healthy) magnum_conductor
2026-02-15 04:58:18.173255 | orchestrator | 6e075030fbb4 registry.osism.tech/kolla/release/magnum-api:19.0.1.20251130 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) magnum_api
2026-02-15 04:58:18.173268 | orchestrator | 1f74c9fc92fd registry.osism.tech/kolla/release/grafana:12.3.0.20251130 "dumb-init --single-…" 7 minutes ago Up 6 minutes grafana
2026-02-15 04:58:18.173301 | orchestrator | df5f2b1258d4 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20251130 "dumb-init --single-…" 9 minutes ago Up 9 minutes prometheus_elasticsearch_exporter
2026-02-15 04:58:18.173316 | orchestrator | 2a7f16d1d755 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_cadvisor
2026-02-15 04:58:18.173327 | orchestrator | aee7059bfef5 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_memcached_exporter
2026-02-15 04:58:18.173340 | orchestrator | 228b2efa12ae registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_mysqld_exporter
2026-02-15 04:58:18.173374 | orchestrator | 3f4893847981 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130 "dumb-init --single-…" 10 minutes ago Up 10 minutes prometheus_node_exporter
2026-02-15 04:58:18.173386 | orchestrator | 6ab038d63b23 registry.osism.tech/kolla/release/manila-share:19.1.1.20251130 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) manila_share
2026-02-15 04:58:18.173398 | orchestrator | 5530d6bedf16 registry.osism.tech/kolla/release/manila-scheduler:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_scheduler
2026-02-15 04:58:18.173445 | orchestrator | 7fb73f7a1159 registry.osism.tech/kolla/release/manila-data:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_data
2026-02-15 04:58:18.173457 | orchestrator | e1ee8f016792 registry.osism.tech/kolla/release/manila-api:19.1.1.20251130 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) manila_api
2026-02-15 04:58:18.173474 | orchestrator | 640bd877edc2 registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_notifier
2026-02-15 04:58:18.173485 | orchestrator | a1a215fb3d52 registry.osism.tech/kolla/release/aodh-listener:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_listener
2026-02-15 04:58:18.173496 | orchestrator | e2b76efd038f registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_evaluator
2026-02-15 04:58:18.173507 | orchestrator | 0b760bb36186 registry.osism.tech/kolla/release/aodh-api:19.0.0.20251130 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) aodh_api
2026-02-15 04:58:18.173518 | orchestrator | 3ff8e3fb84a0 registry.osism.tech/kolla/release/ceilometer-central:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes ceilometer_central
2026-02-15 04:58:18.173528 | orchestrator | 31f64591ed09 registry.osism.tech/kolla/release/ceilometer-notification:23.0.2.20251130 "dumb-init --single-…" 20 minutes ago Up 20 minutes (healthy) ceilometer_notification
2026-02-15 04:58:18.173539 | orchestrator | de94e416358d registry.osism.tech/kolla/release/octavia-worker:15.0.2.20251130 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) octavia_worker
2026-02-15 04:58:18.173568 | orchestrator | 17c64e78256a registry.osism.tech/kolla/release/octavia-housekeeping:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_housekeeping
2026-02-15 04:58:18.173580 | orchestrator | ef11cc12dacd registry.osism.tech/kolla/release/octavia-health-manager:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_health_manager
2026-02-15 04:58:18.173591 | orchestrator | 958a5bf989ee registry.osism.tech/kolla/release/octavia-driver-agent:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes octavia_driver_agent
2026-02-15 04:58:18.173602 | orchestrator | cb632fc34148 registry.osism.tech/kolla/release/octavia-api:15.0.2.20251130 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) octavia_api
2026-02-15 04:58:18.173612 | orchestrator | 1726fd83d17f registry.osism.tech/kolla/release/designate-worker:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 26 minutes (healthy) designate_worker
2026-02-15 04:58:18.173623 | orchestrator | eb57c0b680cf registry.osism.tech/kolla/release/designate-mdns:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_mdns
2026-02-15 04:58:18.173641 | orchestrator | d2ed93eabdf4 registry.osism.tech/kolla/release/designate-producer:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_producer
2026-02-15 04:58:18.173652 | orchestrator | 5cad63c49303 registry.osism.tech/kolla/release/designate-central:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_central
2026-02-15 04:58:18.173663 | orchestrator | 43b093a07ebe registry.osism.tech/kolla/release/designate-api:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_api
2026-02-15 04:58:18.173674 | orchestrator | f4d9615ab97c registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20251130 "dumb-init --single-…" 27 minutes ago Up 27 minutes (healthy) designate_backend_bind9
2026-02-15 04:58:18.173688 | orchestrator | 41d4e9f0d67b registry.osism.tech/kolla/release/barbican-worker:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_worker
2026-02-15 04:58:18.173700 | orchestrator | 1d14372e7868 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_keystone_listener
2026-02-15 04:58:18.173713 | orchestrator | 17c376b6ec86 registry.osism.tech/kolla/release/barbican-api:19.0.1.20251130 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) barbican_api
2026-02-15 04:58:18.173725 | orchestrator | 71860c505806 registry.osism.tech/kolla/release/cinder-backup:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_backup
2026-02-15 04:58:18.173738 | orchestrator | 740ce0b78b7f registry.osism.tech/kolla/release/cinder-volume:25.3.1.20251130 "dumb-init --single-…" 31 minutes ago Up 31 minutes (healthy) cinder_volume
2026-02-15 04:58:18.173750 | orchestrator | 5033eb57df7a registry.osism.tech/kolla/release/cinder-scheduler:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_scheduler
2026-02-15 04:58:18.173762 | orchestrator | 0cde5a409086 registry.osism.tech/kolla/release/cinder-api:25.3.1.20251130 "dumb-init --single-…" 32 minutes ago Up 32 minutes (healthy) cinder_api
2026-02-15 04:58:18.173774 | orchestrator | f23c5ce79d4c registry.osism.tech/kolla/release/glance-api:29.0.1.20251130 "dumb-init --single-…" 34 minutes ago Up 34 minutes (healthy) glance_api
2026-02-15 04:58:18.173787 | orchestrator | 6201622ece7b registry.osism.tech/kolla/release/skyline-console:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_console
2026-02-15 04:58:18.173800 | orchestrator | ffdca9de8dc9 registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20251130 "dumb-init --single-…" 37 minutes ago Up 37 minutes (healthy) skyline_apiserver
2026-02-15 04:58:18.173820 | orchestrator | e1da43aea79d registry.osism.tech/kolla/release/horizon:25.1.2.20251130 "dumb-init --single-…" 38 minutes ago Up 38 minutes (healthy) horizon
2026-02-15 04:58:18.173833 | orchestrator | 32d3106e0988 registry.osism.tech/kolla/release/nova-novncproxy:30.2.1.20251130 "dumb-init --single-…" 42 minutes ago Up 42 minutes (healthy) nova_novncproxy
2026-02-15 04:58:18.173852 | orchestrator | bdde6743b055 registry.osism.tech/kolla/release/nova-conductor:30.2.1.20251130 "dumb-init --single-…" 43 minutes ago Up 43 minutes (healthy) nova_conductor
2026-02-15 04:58:18.173865 | orchestrator | 203c2f24ac4e registry.osism.tech/kolla/release/nova-api:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_api
2026-02-15 04:58:18.173884 | orchestrator | e063baea85c1 registry.osism.tech/kolla/release/nova-scheduler:30.2.1.20251130 "dumb-init --single-…" 44 minutes ago Up 44 minutes (healthy) nova_scheduler
2026-02-15 04:58:18.173897 | orchestrator | ae432f070e43 registry.osism.tech/kolla/release/neutron-server:25.2.2.20251130 "dumb-init --single-…" 50 minutes ago Up 50 minutes (healthy) neutron_server
2026-02-15 04:58:18.173909 | orchestrator | 592dbfd4a562 registry.osism.tech/kolla/release/placement-api:12.0.1.20251130 "dumb-init --single-…" 53 minutes ago Up 53 minutes (healthy) placement_api
2026-02-15 04:58:18.173922 | orchestrator | 439cf9cf4761 registry.osism.tech/kolla/release/keystone:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone
2026-02-15 04:58:18.173935 | orchestrator | da7f6e0b4c82 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_fernet
2026-02-15 04:58:18.173948 | orchestrator | bc968c3098c8 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20251130 "dumb-init --single-…" 56 minutes ago Up 56 minutes (healthy) keystone_ssh
2026-02-15 04:58:18.173960 | orchestrator | 65df7a6a4367 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 58 minutes ago Up 58 minutes ceph-mgr-testbed-node-2
2026-02-15 04:58:18.173973 | orchestrator | e9ff0da1d1a7 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" About an hour ago Up About an hour ceph-crash-testbed-node-2
2026-02-15 04:58:18.173985 | orchestrator | 9cffadff9441 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" About an hour ago Up About an hour ceph-mon-testbed-node-2
2026-02-15 04:58:18.174002 | orchestrator | 81eac6817cf7 registry.osism.tech/kolla/release/ovn-northd:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_northd
2026-02-15 04:58:18.174061 | orchestrator | d1cb6f1c9e5b registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_sb_db
2026-02-15 04:58:18.174074 | orchestrator | 8ab13f2552b4 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_nb_db
2026-02-15 04:58:18.174084 | orchestrator | 36b9defc4ca5 registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour ovn_controller
2026-02-15 04:58:18.174095 | orchestrator | 44f41055449f registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_vswitchd
2026-02-15 04:58:18.174106 | orchestrator | 435b05dabd23 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) openvswitch_db
2026-02-15 04:58:18.174116 | orchestrator | 1e52e40359ab registry.osism.tech/kolla/release/rabbitmq:3.13.7.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) rabbitmq
2026-02-15 04:58:18.174134 | orchestrator | a3810861853d registry.osism.tech/kolla/release/mariadb-server:10.11.15.20251130 "dumb-init -- kolla_…" About an hour ago Up About an hour (healthy) mariadb
2026-02-15 04:58:18.174152 | orchestrator | 9d5cc0daf728 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis_sentinel
2026-02-15 04:58:18.174163 | orchestrator | 44bd6319ac0f registry.osism.tech/kolla/release/redis:7.0.15.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) redis
2026-02-15 04:58:18.174174 | orchestrator | 8a6233dd8fcf registry.osism.tech/kolla/release/memcached:1.6.24.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) memcached
2026-02-15 04:58:18.174185 | orchestrator | 59c0d12c372b registry.osism.tech/kolla/release/opensearch-dashboards:2.19.4.20251130 "dumb-init --single-…" About an hour ago Up About an hour (healthy) opensearch_dashboards
2026-02-15 04:58:18.174196 | orchestrator | 0173dacb2c63 registry.osism.tech/kolla/release/opensearch:2.19.4.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) opensearch
2026-02-15 04:58:18.174207 | orchestrator | cb1bb08ec320 registry.osism.tech/kolla/release/keepalived:2.2.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours keepalived
2026-02-15 04:58:18.174217 | orchestrator | 67f2af5f2201 registry.osism.tech/kolla/release/proxysql:3.0.3.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) proxysql
2026-02-15 04:58:18.174228 | orchestrator | b1dce06e870f registry.osism.tech/kolla/release/haproxy:2.8.15.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours (healthy) haproxy
2026-02-15 04:58:18.174239 | orchestrator | b9310499d11d registry.osism.tech/kolla/release/cron:3.0.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours cron
2026-02-15 04:58:18.174249 | orchestrator | 4b9640d20f78 registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours kolla_toolbox
2026-02-15 04:58:18.174260 | orchestrator | cbf453d96922 registry.osism.tech/kolla/release/fluentd:5.0.8.20251130 "dumb-init --single-…" 2 hours ago Up 2 hours fluentd
2026-02-15 04:58:18.485796 | orchestrator |
2026-02-15 04:58:18.485895 | orchestrator | ## Images @ testbed-node-2
2026-02-15 04:58:18.485911 | orchestrator |
2026-02-15 04:58:18.485924 | orchestrator | + echo
2026-02-15 04:58:18.485936 | orchestrator | + echo '## Images @ testbed-node-2'
2026-02-15 04:58:18.485950 | orchestrator | + echo
2026-02-15 04:58:18.485962 | orchestrator | + osism container testbed-node-2 images
2026-02-15 04:58:21.018271 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE
2026-02-15 04:58:21.018385 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20251130 618df24dfbf4 2 months ago 322MB
2026-02-15 04:58:21.018400 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.24.20251130 8a9865997707 2 months ago 266MB
2026-02-15 04:58:21.018458 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.4.20251130 dc62f23331d2 2 months ago 1.56GB
2026-02-15 04:58:21.018471 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.4.20251130 3b3613dd9b1a 2 months ago 1.53GB
2026-02-15 04:58:21.018483 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.8.20251130 94862d07fc5a 2 months ago 276MB
2026-02-15 04:58:21.018494 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.7.1.20251130 314d22193a72 2 months ago 669MB
2026-02-15 04:58:21.018505 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20251130 e1e0428a330f 2 months ago 265MB
2026-02-15 04:58:21.018516 | orchestrator | registry.osism.tech/kolla/release/grafana 12.3.0.20251130 6eb3b7b1dbf2 2 months ago 1.02GB
2026-02-15 04:58:21.018549 | orchestrator | registry.osism.tech/kolla/release/proxysql 3.0.3.20251130 2c7177938c0e 2 months ago 412MB
2026-02-15 04:58:21.018560 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.8.15.20251130 6d4c583df983 2 months ago 274MB
2026-02-15 04:58:21.018575 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.8.20251130 fb3c98fc8cae 2 months ago 578MB
2026-02-15 04:58:21.018586 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20251130 5548a8ce5b5c 2 months ago 273MB
2026-02-15 04:58:21.018598 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20251130 62d0b016058f 2 months ago 273MB
2026-02-15 04:58:21.018609 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.15.20251130 77db67eebcc3 2 months ago 452MB
2026-02-15 04:58:21.018619 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.2.20251130 d7257ed845e9 2 months ago 1.15GB
2026-02-15 04:58:21.018630 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20251130 aedc672fb472 2 months ago 301MB
2026-02-15 04:58:21.018641 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20251130 7b077076926d 2 months ago 298MB
2026-02-15 04:58:21.018651 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20251130 591cbce746c1 2 months ago 357MB
2026-02-15 04:58:21.018662 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20251130 bcaaf5d64345 2 months ago 292MB
2026-02-15 04:58:21.018673 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20251130 c1ab1d07f7ef 2 months ago 305MB
2026-02-15 04:58:21.018700 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.3.20251130 3e6f3fe8823c 2 months ago 279MB
2026-02-15 04:58:21.018712 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.3.20251130 ad8bb4636454 2 months ago 279MB
2026-02-15 04:58:21.018723 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20251130 20317ff6dfb9 2 months ago 975MB
2026-02-15 04:58:21.018734 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.2.1.20251130 99323056afa4 2 months ago 1.37GB
2026-02-15 04:58:21.018745 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.2.1.20251130 92609e648215 2 months ago 1.21GB
2026-02-15 04:58:21.018756 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.2.1.20251130 2d78e7fdfb9a 2 months ago 1.21GB
2026-02-15 04:58:21.018767 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.2.1.20251130 4c3c59730530 2 months ago 1.21GB
2026-02-15 04:58:21.018805 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.2.20251130 37cb6975d4a5 2 months ago 976MB
2026-02-15 04:58:21.018820 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.2.20251130 bb2927b293dc 2 months ago 976MB
2026-02-15 04:58:21.018832 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20251130 a85fdbb4bbba 2 months ago 1.13GB
2026-02-15 04:58:21.018845 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20251130 a98ee1099aad 2 months ago 1.24GB
2026-02-15 04:58:21.018876 | orchestrator | registry.osism.tech/kolla/release/manila-share 19.1.1.20251130 df44f491f2c1 2 months ago 1.22GB
2026-02-15 04:58:21.018889 | orchestrator | registry.osism.tech/kolla/release/manila-data 19.1.1.20251130 cd8b74c8a47a 2 months ago 1.06GB
2026-02-15 04:58:21.018902 | orchestrator | registry.osism.tech/kolla/release/manila-api 19.1.1.20251130 654f9bd3c940 2 months ago 1.05GB
2026-02-15 04:58:21.018930 | orchestrator | registry.osism.tech/kolla/release/manila-scheduler 19.1.1.20251130 e0864fa03a78 2 months ago 1.05GB
2026-02-15 04:58:21.018944 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20251130 1e68c23a9d38 2 months ago 974MB
2026-02-15 04:58:21.018956 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20251130 1726a7592f93 2 months ago 974MB
2026-02-15 04:58:21.018969 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20251130 abbd6e9f87e2 2 months ago 974MB
2026-02-15 04:58:21.018982 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20251130 82a64f1d056d 2 months ago 973MB
2026-02-15 04:58:21.018995 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20251130 2cef5d51872b 2 months ago 991MB
2026-02-15 04:58:21.019007 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20251130 bfcd8631a126 2 months ago 991MB
2026-02-15 04:58:21.019020 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20251130 9195ddc3e4c5 2 months ago 990MB
2026-02-15 04:58:21.019033 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20251130 6c1543e94c06 2 months ago 1.09GB
2026-02-15 04:58:21.019046 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20251130 36669c355898 2 months ago 1.04GB
2026-02-15 04:58:21.019059 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20251130 e002cffc8eb8 2 months ago 1.04GB
2026-02-15 04:58:21.019072 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.2.20251130 059dc6d4a159 2 months ago 1.03GB
2026-02-15 04:58:21.019084 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.2.20251130 c9059accdc4a 2 months ago 1.03GB
2026-02-15 04:58:21.019097 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.2.20251130 9375641bed7a 2 months ago 1.05GB
2026-02-15 04:58:21.019110 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.2.20251130 708f50e37fa7 2 months ago 1.03GB
2026-02-15 04:58:21.019123 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.2.20251130 045f928baedc 2 months ago 1.05GB
2026-02-15 04:58:21.019136 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.2.20251130 fa71fe0a109e 2 months ago 1.16GB
2026-02-15 04:58:21.019148 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20251130 b1fcfbc49057 2 months ago 1.1GB
2026-02-15 04:58:21.019159 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20251130 00b6af03994a 2 months ago 983MB
2026-02-15 04:58:21.019170 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20251130 18bc80370e46 2 months ago 989MB
2026-02-15 04:58:21.019181 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20251130 eac4506bf51f 2 months ago 984MB
2026-02-15 04:58:21.019192 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20251130 ad5d5cd1392a 2 months ago 984MB
2026-02-15 04:58:21.019203 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20251130 4e19a1dc9c8a 2 months ago 989MB
2026-02-15 04:58:21.019214 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20251130 4ad9e0017d6e 2 months ago 984MB
2026-02-15 04:58:21.019224 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20251130 20430a0acd38 2 months ago 1.05GB
2026-02-15 04:58:21.019235 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20251130 20bbe1600b66 2 months ago 990MB
2026-02-15 04:58:21.019253 | orchestrator | registry.osism.tech/kolla/release/cinder-volume 25.3.1.20251130 ab7ee3c06214 2 months ago 1.72GB
2026-02-15 04:58:21.019264 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.3.1.20251130 47d31cd2c25d 2 months ago 1.4GB
2026-02-15 04:58:21.019275 | orchestrator | registry.osism.tech/kolla/release/cinder-backup 25.3.1.20251130 c09074b62f18 2 months ago 1.41GB
2026-02-15 04:58:21.019293 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.3.1.20251130 ceaaac81e8af 2 months ago 1.4GB
2026-02-15 04:58:21.019305 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.3.20251130 e52b6499881a 2 months ago 840MB
2026-02-15 04:58:21.019316 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.3.20251130 fcd09e53d925 2 months ago 840MB
2026-02-15 04:58:21.019327 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.3.20251130 2fcefdb5b030 2 months ago 840MB
2026-02-15 04:58:21.019338 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.3.20251130 948e5d22de86 2 months ago 840MB
2026-02-15 04:58:21.019349 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 9 months ago 1.27GB
2026-02-15 04:58:21.336079 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh
2026-02-15 04:58:21.342148 | orchestrator | + set -e
2026-02-15 04:58:21.342246 | orchestrator | + source /opt/manager-vars.sh
2026-02-15 04:58:21.342273 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-15 04:58:21.342292 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-15 04:58:21.342311 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-15 04:58:21.342323 | orchestrator | ++ CEPH_VERSION=reef
2026-02-15 04:58:21.342334 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-15 04:58:21.342352 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-15 04:58:21.342371 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-15 04:58:21.342389 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-15 04:58:21.342405 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-15 04:58:21.342448 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-15 04:58:21.342467 | orchestrator | ++ export ARA=false
2026-02-15 04:58:21.342485 | orchestrator | ++ ARA=false
2026-02-15 04:58:21.342504 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-15 04:58:21.342523 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-15 04:58:21.342537 | orchestrator | ++ export TEMPEST=false
2026-02-15 04:58:21.342555 | orchestrator | ++ TEMPEST=false
2026-02-15 04:58:21.342574 | orchestrator | ++ export IS_ZUUL=true
2026-02-15 04:58:21.342592 | orchestrator | ++ IS_ZUUL=true
2026-02-15 04:58:21.342610 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 04:58:21.342628 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 04:58:21.342731 | orchestrator | ++ export EXTERNAL_API=false
2026-02-15 04:58:21.342756 | orchestrator | ++ EXTERNAL_API=false
2026-02-15 04:58:21.342774 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-15 04:58:21.342793 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-15 04:58:21.342813 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-15 04:58:21.342833 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-15 04:58:21.342851 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-15 04:58:21.342869 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-15 04:58:21.342888 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-15 04:58:21.342907 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
2026-02-15 04:58:21.352854 | orchestrator | + set -e
2026-02-15 04:58:21.352905 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-15 04:58:21.352912 | orchestrator | ++ export INTERACTIVE=false
2026-02-15 04:58:21.352918 | orchestrator | ++ INTERACTIVE=false
2026-02-15 04:58:21.352922 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-15 04:58:21.352926 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-15 04:58:21.352937 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-15 04:58:21.354597 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-15 04:58:21.362702 | orchestrator |
2026-02-15 04:58:21.362739 | orchestrator | #
Ceph status 2026-02-15 04:58:21.362744 | orchestrator | 2026-02-15 04:58:21.362748 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-15 04:58:21.362755 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-15 04:58:21.362783 | orchestrator | + echo 2026-02-15 04:58:21.362788 | orchestrator | + echo '# Ceph status' 2026-02-15 04:58:21.362792 | orchestrator | + echo 2026-02-15 04:58:21.362795 | orchestrator | + ceph -s 2026-02-15 04:58:21.972909 | orchestrator | cluster: 2026-02-15 04:58:21.973021 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2026-02-15 04:58:21.973039 | orchestrator | health: HEALTH_OK 2026-02-15 04:58:21.973052 | orchestrator | 2026-02-15 04:58:21.973064 | orchestrator | services: 2026-02-15 04:58:21.973087 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 71m) 2026-02-15 04:58:21.973109 | orchestrator | mgr: testbed-node-2(active, since 58m), standbys: testbed-node-0, testbed-node-1 2026-02-15 04:58:21.973163 | orchestrator | mds: 1/1 daemons up, 2 standby 2026-02-15 04:58:21.973179 | orchestrator | osd: 6 osds: 6 up (since 67m), 6 in (since 68m) 2026-02-15 04:58:21.973190 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2026-02-15 04:58:21.973202 | orchestrator | 2026-02-15 04:58:21.973213 | orchestrator | data: 2026-02-15 04:58:21.973224 | orchestrator | volumes: 1/1 healthy 2026-02-15 04:58:21.973235 | orchestrator | pools: 14 pools, 401 pgs 2026-02-15 04:58:21.973247 | orchestrator | objects: 556 objects, 2.2 GiB 2026-02-15 04:58:21.973258 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2026-02-15 04:58:21.973269 | orchestrator | pgs: 401 active+clean 2026-02-15 04:58:21.973281 | orchestrator | 2026-02-15 04:58:22.020919 | orchestrator | 2026-02-15 04:58:22.021022 | orchestrator | # Ceph versions 2026-02-15 04:58:22.021035 | orchestrator | 2026-02-15 04:58:22.021043 | orchestrator | + echo 2026-02-15 04:58:22.021053 | orchestrator | + echo '# Ceph versions' 
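The check script above prints a section header and runs `ceph -s`, relying on a human (or the job's exit code) to notice the `health:` line. A minimal sketch of how such a gate could be automated by parsing the captured status text — the `health_ok` helper and the sample text are illustrative, not part of the testbed scripts:

```python
def health_ok(status_text: str) -> bool:
    """Return True if a captured `ceph -s` dump reports HEALTH_OK."""
    for line in status_text.splitlines():
        line = line.strip()
        if line.startswith("health:"):
            # Everything after "health:" is the cluster health state.
            return line.split(":", 1)[1].strip() == "HEALTH_OK"
    # No health line found: treat as unhealthy rather than silently passing.
    return False


sample = """cluster:
    id: 11111111-1111-1111-1111-111111111111
    health: HEALTH_OK
"""
print(health_ok(sample))  # True
```

In practice `ceph -s --format json` with a JSON parser is more robust than scraping the plain-text layout, which can change between releases.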
2026-02-15 04:58:22.021062 | orchestrator | + echo 2026-02-15 04:58:22.021070 | orchestrator | + ceph versions 2026-02-15 04:58:22.625348 | orchestrator | { 2026-02-15 04:58:22.625550 | orchestrator | "mon": { 2026-02-15 04:58:22.625581 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-15 04:58:22.625601 | orchestrator | }, 2026-02-15 04:58:22.625621 | orchestrator | "mgr": { 2026-02-15 04:58:22.625640 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-15 04:58:22.625660 | orchestrator | }, 2026-02-15 04:58:22.625678 | orchestrator | "osd": { 2026-02-15 04:58:22.625698 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2026-02-15 04:58:22.625712 | orchestrator | }, 2026-02-15 04:58:22.625723 | orchestrator | "mds": { 2026-02-15 04:58:22.625734 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-15 04:58:22.625745 | orchestrator | }, 2026-02-15 04:58:22.625756 | orchestrator | "rgw": { 2026-02-15 04:58:22.625767 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2026-02-15 04:58:22.625778 | orchestrator | }, 2026-02-15 04:58:22.625789 | orchestrator | "overall": { 2026-02-15 04:58:22.625800 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2026-02-15 04:58:22.625811 | orchestrator | } 2026-02-15 04:58:22.625822 | orchestrator | } 2026-02-15 04:58:22.688160 | orchestrator | 2026-02-15 04:58:22.688275 | orchestrator | # Ceph OSD tree 2026-02-15 04:58:22.688300 | orchestrator | 2026-02-15 04:58:22.688317 | orchestrator | + echo 2026-02-15 04:58:22.688364 | orchestrator | + echo '# Ceph OSD tree' 2026-02-15 04:58:22.688375 | orchestrator | + echo 2026-02-15 04:58:22.688385 | orchestrator | + ceph osd df tree 2026-02-15 04:58:23.255711 | orchestrator | ID CLASS 
WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2026-02-15 04:58:23.255850 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 406 MiB 113 GiB 5.90 1.00 - root default 2026-02-15 04:58:23.255866 | orchestrator | -7 0.03897 - 40 GiB 2.3 GiB 2.2 GiB 2 KiB 127 MiB 38 GiB 5.88 1.00 - host testbed-node-3 2026-02-15 04:58:23.255878 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.3 GiB 1.3 GiB 1 KiB 62 MiB 19 GiB 6.69 1.13 175 up osd.0 2026-02-15 04:58:23.255889 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.0 GiB 971 MiB 1 KiB 66 MiB 19 GiB 5.06 0.86 213 up osd.3 2026-02-15 04:58:23.255918 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.90 1.00 - host testbed-node-4 2026-02-15 04:58:23.255937 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.90 1.17 209 up osd.1 2026-02-15 04:58:23.255989 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1004 MiB 939 MiB 1 KiB 66 MiB 19 GiB 4.91 0.83 181 up osd.5 2026-02-15 04:58:23.256008 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 139 MiB 38 GiB 5.91 1.00 - host testbed-node-5 2026-02-15 04:58:23.256027 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.5 GiB 1.4 GiB 1 KiB 66 MiB 18 GiB 7.57 1.28 203 up osd.2 2026-02-15 04:58:23.256046 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 868 MiB 795 MiB 1 KiB 74 MiB 19 GiB 4.24 0.72 189 up osd.4 2026-02-15 04:58:23.256064 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 406 MiB 113 GiB 5.90 2026-02-15 04:58:23.256083 | orchestrator | MIN/MAX VAR: 0.72/1.28 STDDEV: 1.21 2026-02-15 04:58:23.299759 | orchestrator | 2026-02-15 04:58:23.299861 | orchestrator | # Ceph monitor status 2026-02-15 04:58:23.299879 | orchestrator | 2026-02-15 04:58:23.299891 | orchestrator | + echo 2026-02-15 04:58:23.299903 | orchestrator | + echo '# Ceph monitor status' 2026-02-15 04:58:23.299914 | orchestrator | + echo 2026-02-15 04:58:23.299925 | orchestrator | + ceph mon stat 2026-02-15 
04:58:23.889455 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 10, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2026-02-15 04:58:23.935938 | orchestrator | 2026-02-15 04:58:23.936057 | orchestrator | # Ceph quorum status 2026-02-15 04:58:23.936075 | orchestrator | 2026-02-15 04:58:23.936087 | orchestrator | + echo 2026-02-15 04:58:23.936099 | orchestrator | + echo '# Ceph quorum status' 2026-02-15 04:58:23.936110 | orchestrator | + echo 2026-02-15 04:58:23.936758 | orchestrator | + ceph quorum_status 2026-02-15 04:58:23.936801 | orchestrator | + jq 2026-02-15 04:58:24.578307 | orchestrator | { 2026-02-15 04:58:24.578408 | orchestrator | "election_epoch": 10, 2026-02-15 04:58:24.578477 | orchestrator | "quorum": [ 2026-02-15 04:58:24.578491 | orchestrator | 0, 2026-02-15 04:58:24.578502 | orchestrator | 1, 2026-02-15 04:58:24.578513 | orchestrator | 2 2026-02-15 04:58:24.578524 | orchestrator | ], 2026-02-15 04:58:24.578534 | orchestrator | "quorum_names": [ 2026-02-15 04:58:24.578546 | orchestrator | "testbed-node-0", 2026-02-15 04:58:24.578557 | orchestrator | "testbed-node-1", 2026-02-15 04:58:24.578568 | orchestrator | "testbed-node-2" 2026-02-15 04:58:24.578579 | orchestrator | ], 2026-02-15 04:58:24.578590 | orchestrator | "quorum_leader_name": "testbed-node-0", 2026-02-15 04:58:24.578603 | orchestrator | "quorum_age": 4264, 2026-02-15 04:58:24.578613 | orchestrator | "features": { 2026-02-15 04:58:24.578624 | orchestrator | "quorum_con": "4540138322906710015", 2026-02-15 04:58:24.578635 | orchestrator | "quorum_mon": [ 2026-02-15 04:58:24.578646 | orchestrator | "kraken", 2026-02-15 04:58:24.578657 | orchestrator | "luminous", 2026-02-15 04:58:24.578667 | orchestrator | "mimic", 
2026-02-15 04:58:24.578679 | orchestrator | "osdmap-prune", 2026-02-15 04:58:24.578689 | orchestrator | "nautilus", 2026-02-15 04:58:24.578700 | orchestrator | "octopus", 2026-02-15 04:58:24.578710 | orchestrator | "pacific", 2026-02-15 04:58:24.578721 | orchestrator | "elector-pinging", 2026-02-15 04:58:24.578731 | orchestrator | "quincy", 2026-02-15 04:58:24.578742 | orchestrator | "reef" 2026-02-15 04:58:24.578752 | orchestrator | ] 2026-02-15 04:58:24.578763 | orchestrator | }, 2026-02-15 04:58:24.578774 | orchestrator | "monmap": { 2026-02-15 04:58:24.578785 | orchestrator | "epoch": 1, 2026-02-15 04:58:24.578796 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2026-02-15 04:58:24.578807 | orchestrator | "modified": "2026-02-15T03:46:52.484879Z", 2026-02-15 04:58:24.578819 | orchestrator | "created": "2026-02-15T03:46:52.484879Z", 2026-02-15 04:58:24.578830 | orchestrator | "min_mon_release": 18, 2026-02-15 04:58:24.578841 | orchestrator | "min_mon_release_name": "reef", 2026-02-15 04:58:24.578851 | orchestrator | "election_strategy": 1, 2026-02-15 04:58:24.578862 | orchestrator | "disallowed_leaders: ": "", 2026-02-15 04:58:24.578873 | orchestrator | "stretch_mode": false, 2026-02-15 04:58:24.578883 | orchestrator | "tiebreaker_mon": "", 2026-02-15 04:58:24.578894 | orchestrator | "removed_ranks: ": "", 2026-02-15 04:58:24.578904 | orchestrator | "features": { 2026-02-15 04:58:24.578969 | orchestrator | "persistent": [ 2026-02-15 04:58:24.578981 | orchestrator | "kraken", 2026-02-15 04:58:24.578991 | orchestrator | "luminous", 2026-02-15 04:58:24.579002 | orchestrator | "mimic", 2026-02-15 04:58:24.579013 | orchestrator | "osdmap-prune", 2026-02-15 04:58:24.579023 | orchestrator | "nautilus", 2026-02-15 04:58:24.579034 | orchestrator | "octopus", 2026-02-15 04:58:24.579044 | orchestrator | "pacific", 2026-02-15 04:58:24.579055 | orchestrator | "elector-pinging", 2026-02-15 04:58:24.579065 | orchestrator | "quincy", 2026-02-15 04:58:24.579076 | 
orchestrator | "reef" 2026-02-15 04:58:24.579087 | orchestrator | ], 2026-02-15 04:58:24.579098 | orchestrator | "optional": [] 2026-02-15 04:58:24.579108 | orchestrator | }, 2026-02-15 04:58:24.579119 | orchestrator | "mons": [ 2026-02-15 04:58:24.579130 | orchestrator | { 2026-02-15 04:58:24.579140 | orchestrator | "rank": 0, 2026-02-15 04:58:24.579151 | orchestrator | "name": "testbed-node-0", 2026-02-15 04:58:24.579162 | orchestrator | "public_addrs": { 2026-02-15 04:58:24.579173 | orchestrator | "addrvec": [ 2026-02-15 04:58:24.579183 | orchestrator | { 2026-02-15 04:58:24.579194 | orchestrator | "type": "v2", 2026-02-15 04:58:24.579205 | orchestrator | "addr": "192.168.16.10:3300", 2026-02-15 04:58:24.579216 | orchestrator | "nonce": 0 2026-02-15 04:58:24.579227 | orchestrator | }, 2026-02-15 04:58:24.579238 | orchestrator | { 2026-02-15 04:58:24.579248 | orchestrator | "type": "v1", 2026-02-15 04:58:24.579259 | orchestrator | "addr": "192.168.16.10:6789", 2026-02-15 04:58:24.579269 | orchestrator | "nonce": 0 2026-02-15 04:58:24.579280 | orchestrator | } 2026-02-15 04:58:24.579294 | orchestrator | ] 2026-02-15 04:58:24.579312 | orchestrator | }, 2026-02-15 04:58:24.579334 | orchestrator | "addr": "192.168.16.10:6789/0", 2026-02-15 04:58:24.579360 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2026-02-15 04:58:24.579377 | orchestrator | "priority": 0, 2026-02-15 04:58:24.579394 | orchestrator | "weight": 0, 2026-02-15 04:58:24.579469 | orchestrator | "crush_location": "{}" 2026-02-15 04:58:24.579491 | orchestrator | }, 2026-02-15 04:58:24.579509 | orchestrator | { 2026-02-15 04:58:24.579526 | orchestrator | "rank": 1, 2026-02-15 04:58:24.579544 | orchestrator | "name": "testbed-node-1", 2026-02-15 04:58:24.579562 | orchestrator | "public_addrs": { 2026-02-15 04:58:24.579580 | orchestrator | "addrvec": [ 2026-02-15 04:58:24.579599 | orchestrator | { 2026-02-15 04:58:24.579616 | orchestrator | "type": "v2", 2026-02-15 04:58:24.579633 | orchestrator | 
"addr": "192.168.16.11:3300", 2026-02-15 04:58:24.579644 | orchestrator | "nonce": 0 2026-02-15 04:58:24.579655 | orchestrator | }, 2026-02-15 04:58:24.579666 | orchestrator | { 2026-02-15 04:58:24.579676 | orchestrator | "type": "v1", 2026-02-15 04:58:24.579687 | orchestrator | "addr": "192.168.16.11:6789", 2026-02-15 04:58:24.579698 | orchestrator | "nonce": 0 2026-02-15 04:58:24.579708 | orchestrator | } 2026-02-15 04:58:24.579719 | orchestrator | ] 2026-02-15 04:58:24.579729 | orchestrator | }, 2026-02-15 04:58:24.579740 | orchestrator | "addr": "192.168.16.11:6789/0", 2026-02-15 04:58:24.579751 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2026-02-15 04:58:24.579762 | orchestrator | "priority": 0, 2026-02-15 04:58:24.579773 | orchestrator | "weight": 0, 2026-02-15 04:58:24.579783 | orchestrator | "crush_location": "{}" 2026-02-15 04:58:24.579794 | orchestrator | }, 2026-02-15 04:58:24.579805 | orchestrator | { 2026-02-15 04:58:24.579815 | orchestrator | "rank": 2, 2026-02-15 04:58:24.579826 | orchestrator | "name": "testbed-node-2", 2026-02-15 04:58:24.579837 | orchestrator | "public_addrs": { 2026-02-15 04:58:24.579848 | orchestrator | "addrvec": [ 2026-02-15 04:58:24.579859 | orchestrator | { 2026-02-15 04:58:24.579869 | orchestrator | "type": "v2", 2026-02-15 04:58:24.579880 | orchestrator | "addr": "192.168.16.12:3300", 2026-02-15 04:58:24.579891 | orchestrator | "nonce": 0 2026-02-15 04:58:24.579901 | orchestrator | }, 2026-02-15 04:58:24.579912 | orchestrator | { 2026-02-15 04:58:24.579923 | orchestrator | "type": "v1", 2026-02-15 04:58:24.579934 | orchestrator | "addr": "192.168.16.12:6789", 2026-02-15 04:58:24.579945 | orchestrator | "nonce": 0 2026-02-15 04:58:24.579955 | orchestrator | } 2026-02-15 04:58:24.579966 | orchestrator | ] 2026-02-15 04:58:24.579977 | orchestrator | }, 2026-02-15 04:58:24.579988 | orchestrator | "addr": "192.168.16.12:6789/0", 2026-02-15 04:58:24.580012 | orchestrator | "public_addr": "192.168.16.12:6789/0", 
2026-02-15 04:58:24.580023 | orchestrator | "priority": 0, 2026-02-15 04:58:24.580034 | orchestrator | "weight": 0, 2026-02-15 04:58:24.580044 | orchestrator | "crush_location": "{}" 2026-02-15 04:58:24.580055 | orchestrator | } 2026-02-15 04:58:24.580065 | orchestrator | ] 2026-02-15 04:58:24.580076 | orchestrator | } 2026-02-15 04:58:24.580087 | orchestrator | } 2026-02-15 04:58:24.580098 | orchestrator | 2026-02-15 04:58:24.580109 | orchestrator | # Ceph free space status 2026-02-15 04:58:24.580120 | orchestrator | 2026-02-15 04:58:24.580131 | orchestrator | + echo 2026-02-15 04:58:24.580142 | orchestrator | + echo '# Ceph free space status' 2026-02-15 04:58:24.580153 | orchestrator | + echo 2026-02-15 04:58:24.580164 | orchestrator | + ceph df 2026-02-15 04:58:25.239537 | orchestrator | --- RAW STORAGE --- 2026-02-15 04:58:25.239653 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2026-02-15 04:58:25.239671 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-02-15 04:58:25.239678 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.90 2026-02-15 04:58:25.239685 | orchestrator | 2026-02-15 04:58:25.239692 | orchestrator | --- POOLS --- 2026-02-15 04:58:25.239700 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2026-02-15 04:58:25.239708 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2026-02-15 04:58:25.239715 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2026-02-15 04:58:25.239721 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2026-02-15 04:58:25.239728 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2026-02-15 04:58:25.239734 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2026-02-15 04:58:25.239741 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2026-02-15 04:58:25.239748 | orchestrator | default.rgw.log 7 32 3.6 KiB 209 408 KiB 0 35 GiB 2026-02-15 04:58:25.239754 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2026-02-15 
04:58:25.239761 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2026-02-15 04:58:25.239767 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2026-02-15 04:58:25.239774 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2026-02-15 04:58:25.239780 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.97 35 GiB 2026-02-15 04:58:25.239787 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2026-02-15 04:58:25.239793 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2026-02-15 04:58:25.287030 | orchestrator | ++ semver 9.5.0 5.0.0 2026-02-15 04:58:25.342584 | orchestrator | + [[ 1 -eq -1 ]] 2026-02-15 04:58:25.342674 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2026-02-15 04:58:25.342689 | orchestrator | + osism apply facts 2026-02-15 04:58:27.356363 | orchestrator | 2026-02-15 04:58:27 | INFO  | Task 098ae949-5739-4ae2-b01f-215643f13a8f (facts) was prepared for execution. 2026-02-15 04:58:27.356531 | orchestrator | 2026-02-15 04:58:27 | INFO  | It takes a moment until task 098ae949-5739-4ae2-b01f-215643f13a8f (facts) has been started and output is visible here. 
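The trace above shows the version gate: `semver 9.5.0 5.0.0` returns `1`, so the `[[ 1 -eq -1 ]]` branch (a code path for managers older than 5.0.0) is skipped. The actual `semver` helper used by the testbed scripts is not shown in this log; the following is an illustrative stand-in for a three-way compare of plain dotted numeric versions, ignoring pre-release and build metadata:

```python
def semver_cmp(a: str, b: str) -> int:
    """Three-way compare of dotted numeric versions: -1 if a<b, 0 if equal, 1 if a>b.

    Hypothetical stand-in for the script's `semver` helper; does not
    handle pre-release tags like `9.5.0-rc1`.
    """
    ta = tuple(int(part) for part in a.split("."))
    tb = tuple(int(part) for part in b.split("."))
    # Tuple comparison is element-wise, so 9.5.0 > 5.0.0 even though "9..." < "5..." would
    # be false under naive string comparison of longer version strings like "10.0.0".
    return (ta > tb) - (ta < tb)


print(semver_cmp("9.5.0", "5.0.0"))  # 1
```

Comparing as integer tuples rather than strings is the important detail: string comparison would order `"10.0.0"` before `"9.5.0"`.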
2026-02-15 04:58:41.283066 | orchestrator | 2026-02-15 04:58:41.283183 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-15 04:58:41.283200 | orchestrator | 2026-02-15 04:58:41.283213 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-15 04:58:41.283224 | orchestrator | Sunday 15 February 2026 04:58:31 +0000 (0:00:00.282) 0:00:00.282 ******* 2026-02-15 04:58:41.283235 | orchestrator | ok: [testbed-manager] 2026-02-15 04:58:41.283248 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:58:41.283259 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:58:41.283270 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:58:41.283280 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:58:41.283291 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:58:41.283301 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:58:41.283312 | orchestrator | 2026-02-15 04:58:41.283348 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-15 04:58:41.283360 | orchestrator | Sunday 15 February 2026 04:58:33 +0000 (0:00:01.181) 0:00:01.464 ******* 2026-02-15 04:58:41.283371 | orchestrator | skipping: [testbed-manager] 2026-02-15 04:58:41.283383 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:58:41.283394 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:58:41.283404 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:58:41.283415 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:58:41.283454 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:58:41.283465 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:58:41.283475 | orchestrator | 2026-02-15 04:58:41.283486 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-15 04:58:41.283497 | orchestrator | 2026-02-15 04:58:41.283508 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-02-15 04:58:41.283519 | orchestrator | Sunday 15 February 2026 04:58:34 +0000 (0:00:01.364) 0:00:02.828 ******* 2026-02-15 04:58:41.283530 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:58:41.283540 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:58:41.283551 | orchestrator | ok: [testbed-manager] 2026-02-15 04:58:41.283561 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:58:41.283572 | orchestrator | ok: [testbed-node-3] 2026-02-15 04:58:41.283662 | orchestrator | ok: [testbed-node-4] 2026-02-15 04:58:41.283676 | orchestrator | ok: [testbed-node-5] 2026-02-15 04:58:41.283688 | orchestrator | 2026-02-15 04:58:41.283701 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-15 04:58:41.283713 | orchestrator | 2026-02-15 04:58:41.283726 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-15 04:58:41.283739 | orchestrator | Sunday 15 February 2026 04:58:40 +0000 (0:00:05.772) 0:00:08.600 ******* 2026-02-15 04:58:41.283751 | orchestrator | skipping: [testbed-manager] 2026-02-15 04:58:41.283763 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:58:41.283776 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:58:41.283787 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:58:41.283800 | orchestrator | skipping: [testbed-node-3] 2026-02-15 04:58:41.283812 | orchestrator | skipping: [testbed-node-4] 2026-02-15 04:58:41.283824 | orchestrator | skipping: [testbed-node-5] 2026-02-15 04:58:41.283836 | orchestrator | 2026-02-15 04:58:41.283849 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 04:58:41.283862 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 04:58:41.283875 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-02-15 04:58:41.283888 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 04:58:41.283901 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 04:58:41.283913 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 04:58:41.283925 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 04:58:41.283937 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 04:58:41.283950 | orchestrator | 2026-02-15 04:58:41.283962 | orchestrator | 2026-02-15 04:58:41.283974 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 04:58:41.283986 | orchestrator | Sunday 15 February 2026 04:58:40 +0000 (0:00:00.633) 0:00:09.233 ******* 2026-02-15 04:58:41.283998 | orchestrator | =============================================================================== 2026-02-15 04:58:41.284019 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.77s 2026-02-15 04:58:41.284030 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s 2026-02-15 04:58:41.284042 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s 2026-02-15 04:58:41.284052 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.63s 2026-02-15 04:58:41.610532 | orchestrator | + osism validate ceph-mons 2026-02-15 04:59:14.503781 | orchestrator | 2026-02-15 04:59:14.503887 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2026-02-15 04:59:14.503904 | orchestrator | 2026-02-15 04:59:14.503916 | orchestrator | TASK [Get timestamp for report file] 
******************************************* 2026-02-15 04:59:14.503928 | orchestrator | Sunday 15 February 2026 04:58:58 +0000 (0:00:00.437) 0:00:00.437 ******* 2026-02-15 04:59:14.503939 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-15 04:59:14.503950 | orchestrator | 2026-02-15 04:59:14.503961 | orchestrator | TASK [Create report output directory] ****************************************** 2026-02-15 04:59:14.503972 | orchestrator | Sunday 15 February 2026 04:58:59 +0000 (0:00:00.817) 0:00:01.255 ******* 2026-02-15 04:59:14.503984 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2026-02-15 04:59:14.503994 | orchestrator | 2026-02-15 04:59:14.504005 | orchestrator | TASK [Define report vars] ****************************************************** 2026-02-15 04:59:14.504016 | orchestrator | Sunday 15 February 2026 04:59:00 +0000 (0:00:01.020) 0:00:02.275 ******* 2026-02-15 04:59:14.504027 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:59:14.504039 | orchestrator | 2026-02-15 04:59:14.504050 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2026-02-15 04:59:14.504061 | orchestrator | Sunday 15 February 2026 04:59:00 +0000 (0:00:00.136) 0:00:02.412 ******* 2026-02-15 04:59:14.504071 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:59:14.504082 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:59:14.504093 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:59:14.504103 | orchestrator | 2026-02-15 04:59:14.504114 | orchestrator | TASK [Get container info] ****************************************************** 2026-02-15 04:59:14.504125 | orchestrator | Sunday 15 February 2026 04:59:01 +0000 (0:00:00.299) 0:00:02.712 ******* 2026-02-15 04:59:14.504153 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:59:14.504165 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:59:14.504176 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:59:14.504186 | 
orchestrator | 2026-02-15 04:59:14.504197 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2026-02-15 04:59:14.504208 | orchestrator | Sunday 15 February 2026 04:59:02 +0000 (0:00:00.990) 0:00:03.703 ******* 2026-02-15 04:59:14.504219 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:59:14.504230 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:59:14.504241 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:59:14.504251 | orchestrator | 2026-02-15 04:59:14.504262 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2026-02-15 04:59:14.504273 | orchestrator | Sunday 15 February 2026 04:59:02 +0000 (0:00:00.309) 0:00:04.012 ******* 2026-02-15 04:59:14.504284 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:59:14.504295 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:59:14.504308 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:59:14.504320 | orchestrator | 2026-02-15 04:59:14.504332 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-15 04:59:14.504345 | orchestrator | Sunday 15 February 2026 04:59:02 +0000 (0:00:00.526) 0:00:04.538 ******* 2026-02-15 04:59:14.504357 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:59:14.504369 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:59:14.504381 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:59:14.504394 | orchestrator | 2026-02-15 04:59:14.504406 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2026-02-15 04:59:14.504419 | orchestrator | Sunday 15 February 2026 04:59:03 +0000 (0:00:00.337) 0:00:04.876 ******* 2026-02-15 04:59:14.504480 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:59:14.504495 | orchestrator | skipping: [testbed-node-1] 2026-02-15 04:59:14.504508 | orchestrator | skipping: [testbed-node-2] 2026-02-15 04:59:14.504520 | orchestrator | 2026-02-15 
04:59:14.504533 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2026-02-15 04:59:14.504546 | orchestrator | Sunday 15 February 2026 04:59:03 +0000 (0:00:00.297) 0:00:05.173 ******* 2026-02-15 04:59:14.504559 | orchestrator | ok: [testbed-node-0] 2026-02-15 04:59:14.504571 | orchestrator | ok: [testbed-node-1] 2026-02-15 04:59:14.504583 | orchestrator | ok: [testbed-node-2] 2026-02-15 04:59:14.504596 | orchestrator | 2026-02-15 04:59:14.504608 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-15 04:59:14.504619 | orchestrator | Sunday 15 February 2026 04:59:03 +0000 (0:00:00.489) 0:00:05.663 ******* 2026-02-15 04:59:14.504630 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:59:14.504640 | orchestrator | 2026-02-15 04:59:14.504656 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-15 04:59:14.504667 | orchestrator | Sunday 15 February 2026 04:59:04 +0000 (0:00:00.259) 0:00:05.922 ******* 2026-02-15 04:59:14.504678 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:59:14.504689 | orchestrator | 2026-02-15 04:59:14.504700 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-15 04:59:14.504710 | orchestrator | Sunday 15 February 2026 04:59:04 +0000 (0:00:00.244) 0:00:06.167 ******* 2026-02-15 04:59:14.504721 | orchestrator | skipping: [testbed-node-0] 2026-02-15 04:59:14.504731 | orchestrator | 2026-02-15 04:59:14.504742 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-15 04:59:14.504753 | orchestrator | Sunday 15 February 2026 04:59:04 +0000 (0:00:00.253) 0:00:06.420 ******* 2026-02-15 04:59:14.504763 | orchestrator | 2026-02-15 04:59:14.504774 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-15 04:59:14.504784 | orchestrator | 
Sunday 15 February 2026 04:59:04 +0000 (0:00:00.071) 0:00:06.492 *******
2026-02-15 04:59:14.504795 | orchestrator |
2026-02-15 04:59:14.504805 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-15 04:59:14.504816 | orchestrator | Sunday 15 February 2026 04:59:04 +0000 (0:00:00.095) 0:00:06.587 *******
2026-02-15 04:59:14.504826 | orchestrator |
2026-02-15 04:59:14.504837 | orchestrator | TASK [Print report file information] *******************************************
2026-02-15 04:59:14.504848 | orchestrator | Sunday 15 February 2026 04:59:04 +0000 (0:00:00.074) 0:00:06.662 *******
2026-02-15 04:59:14.504858 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:14.504869 | orchestrator |
2026-02-15 04:59:14.504880 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-15 04:59:14.504890 | orchestrator | Sunday 15 February 2026 04:59:05 +0000 (0:00:00.243) 0:00:06.905 *******
2026-02-15 04:59:14.504901 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:14.504912 | orchestrator |
2026-02-15 04:59:14.504939 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2026-02-15 04:59:14.504951 | orchestrator | Sunday 15 February 2026 04:59:05 +0000 (0:00:00.283) 0:00:07.188 *******
2026-02-15 04:59:14.504962 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:14.504973 | orchestrator |
2026-02-15 04:59:14.504983 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2026-02-15 04:59:14.504994 | orchestrator | Sunday 15 February 2026 04:59:05 +0000 (0:00:00.135) 0:00:07.324 *******
2026-02-15 04:59:14.505005 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:59:14.505020 | orchestrator |
2026-02-15 04:59:14.505031 | orchestrator | TASK [Set quorum test data] ****************************************************
2026-02-15 04:59:14.505042 | orchestrator | Sunday 15 February 2026 04:59:07 +0000 (0:00:01.587) 0:00:08.912 *******
2026-02-15 04:59:14.505052 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:14.505063 | orchestrator |
2026-02-15 04:59:14.505074 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2026-02-15 04:59:14.505084 | orchestrator | Sunday 15 February 2026 04:59:07 +0000 (0:00:00.501) 0:00:09.413 *******
2026-02-15 04:59:14.505104 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:14.505115 | orchestrator |
2026-02-15 04:59:14.505126 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2026-02-15 04:59:14.505136 | orchestrator | Sunday 15 February 2026 04:59:07 +0000 (0:00:00.131) 0:00:09.545 *******
2026-02-15 04:59:14.505147 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:14.505158 | orchestrator |
2026-02-15 04:59:14.505169 | orchestrator | TASK [Set fsid test vars] ******************************************************
2026-02-15 04:59:14.505179 | orchestrator | Sunday 15 February 2026 04:59:08 +0000 (0:00:00.323) 0:00:09.869 *******
2026-02-15 04:59:14.505190 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:14.505201 | orchestrator |
2026-02-15 04:59:14.505212 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2026-02-15 04:59:14.505222 | orchestrator | Sunday 15 February 2026 04:59:08 +0000 (0:00:00.307) 0:00:10.176 *******
2026-02-15 04:59:14.505233 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:14.505244 | orchestrator |
2026-02-15 04:59:14.505254 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2026-02-15 04:59:14.505265 | orchestrator | Sunday 15 February 2026 04:59:08 +0000 (0:00:00.117) 0:00:10.294 *******
2026-02-15 04:59:14.505276 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:14.505286 | orchestrator |
2026-02-15 04:59:14.505297 | orchestrator | TASK [Prepare status test vars] ************************************************
2026-02-15 04:59:14.505308 | orchestrator | Sunday 15 February 2026 04:59:08 +0000 (0:00:00.146) 0:00:10.440 *******
2026-02-15 04:59:14.505319 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:14.505329 | orchestrator |
2026-02-15 04:59:14.505340 | orchestrator | TASK [Gather status data] ******************************************************
2026-02-15 04:59:14.505351 | orchestrator | Sunday 15 February 2026 04:59:08 +0000 (0:00:00.136) 0:00:10.576 *******
2026-02-15 04:59:14.505361 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:59:14.505413 | orchestrator |
2026-02-15 04:59:14.505426 | orchestrator | TASK [Set health test data] ****************************************************
2026-02-15 04:59:14.505438 | orchestrator | Sunday 15 February 2026 04:59:10 +0000 (0:00:01.371) 0:00:11.948 *******
2026-02-15 04:59:14.505468 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:14.505479 | orchestrator |
2026-02-15 04:59:14.505491 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2026-02-15 04:59:14.505502 | orchestrator | Sunday 15 February 2026 04:59:10 +0000 (0:00:00.342) 0:00:12.290 *******
2026-02-15 04:59:14.505512 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:14.505523 | orchestrator |
2026-02-15 04:59:14.505534 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2026-02-15 04:59:14.505545 | orchestrator | Sunday 15 February 2026 04:59:10 +0000 (0:00:00.151) 0:00:12.441 *******
2026-02-15 04:59:14.505556 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:14.505566 | orchestrator |
2026-02-15 04:59:14.505577 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2026-02-15 04:59:14.505588 | orchestrator | Sunday 15 February 2026 04:59:10 +0000 (0:00:00.148) 0:00:12.589 *******
2026-02-15 04:59:14.505599 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:14.505610 | orchestrator |
2026-02-15 04:59:14.505621 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2026-02-15 04:59:14.505638 | orchestrator | Sunday 15 February 2026 04:59:11 +0000 (0:00:00.153) 0:00:12.743 *******
2026-02-15 04:59:14.505649 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:14.505660 | orchestrator |
2026-02-15 04:59:14.505671 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-15 04:59:14.505682 | orchestrator | Sunday 15 February 2026 04:59:11 +0000 (0:00:00.314) 0:00:13.057 *******
2026-02-15 04:59:14.505693 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:14.505704 | orchestrator |
2026-02-15 04:59:14.505715 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-15 04:59:14.505733 | orchestrator | Sunday 15 February 2026 04:59:11 +0000 (0:00:00.300) 0:00:13.358 *******
2026-02-15 04:59:14.505744 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:14.505755 | orchestrator |
2026-02-15 04:59:14.505766 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-15 04:59:14.505777 | orchestrator | Sunday 15 February 2026 04:59:11 +0000 (0:00:00.260) 0:00:13.618 *******
2026-02-15 04:59:14.505788 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:14.505799 | orchestrator |
2026-02-15 04:59:14.505810 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-15 04:59:14.505821 | orchestrator | Sunday 15 February 2026 04:59:13 +0000 (0:00:01.748) 0:00:15.367 *******
2026-02-15 04:59:14.505832 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:14.505843 | orchestrator |
2026-02-15 04:59:14.505853 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-15 04:59:14.505864 | orchestrator | Sunday 15 February 2026 04:59:13 +0000 (0:00:00.301) 0:00:15.668 *******
2026-02-15 04:59:14.505875 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:14.505886 | orchestrator |
2026-02-15 04:59:14.505905 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-15 04:59:17.464767 | orchestrator | Sunday 15 February 2026 04:59:14 +0000 (0:00:00.280) 0:00:15.949 *******
2026-02-15 04:59:17.464891 | orchestrator |
2026-02-15 04:59:17.464913 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-15 04:59:17.464926 | orchestrator | Sunday 15 February 2026 04:59:14 +0000 (0:00:00.071) 0:00:16.021 *******
2026-02-15 04:59:17.464939 | orchestrator |
2026-02-15 04:59:17.464953 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-15 04:59:17.464967 | orchestrator | Sunday 15 February 2026 04:59:14 +0000 (0:00:00.076) 0:00:16.097 *******
2026-02-15 04:59:17.464980 | orchestrator |
2026-02-15 04:59:17.464993 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-15 04:59:17.465006 | orchestrator | Sunday 15 February 2026 04:59:14 +0000 (0:00:00.079) 0:00:16.177 *******
2026-02-15 04:59:17.465019 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:17.465032 | orchestrator |
2026-02-15 04:59:17.465045 | orchestrator | TASK [Print report file information] *******************************************
2026-02-15 04:59:17.465058 | orchestrator | Sunday 15 February 2026 04:59:16 +0000 (0:00:01.587) 0:00:17.764 *******
2026-02-15 04:59:17.465071 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-15 04:59:17.465084 | orchestrator |  "msg": [
2026-02-15 04:59:17.465099 | orchestrator |  "Validator run completed.",
2026-02-15 04:59:17.465113 | orchestrator |  "You can find the report file here:",
2026-02-15 04:59:17.465126 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2026-02-15T04:58:59+00:00-report.json",
2026-02-15 04:59:17.465141 | orchestrator |  "on the following host:",
2026-02-15 04:59:17.465154 | orchestrator |  "testbed-manager"
2026-02-15 04:59:17.465167 | orchestrator |  ]
2026-02-15 04:59:17.465180 | orchestrator | }
2026-02-15 04:59:17.465193 | orchestrator |
2026-02-15 04:59:17.465205 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 04:59:17.465218 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2026-02-15 04:59:17.465233 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 04:59:17.465247 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 04:59:17.465260 | orchestrator |
2026-02-15 04:59:17.465272 | orchestrator |
2026-02-15 04:59:17.465285 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 04:59:17.465330 | orchestrator | Sunday 15 February 2026 04:59:16 +0000 (0:00:00.907) 0:00:18.672 *******
2026-02-15 04:59:17.465343 | orchestrator | ===============================================================================
2026-02-15 04:59:17.465357 | orchestrator | Aggregate test results step one ----------------------------------------- 1.75s
2026-02-15 04:59:17.465370 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.59s
2026-02-15 04:59:17.465383 | orchestrator | Write report file ------------------------------------------------------- 1.59s
2026-02-15 04:59:17.465395 | orchestrator | Gather status data ------------------------------------------------------ 1.37s
2026-02-15 04:59:17.465407 | orchestrator | Create report output directory ------------------------------------------ 1.02s
2026-02-15 04:59:17.465420 | orchestrator | Get container info ------------------------------------------------------ 0.99s
2026-02-15 04:59:17.465433 | orchestrator | Print report file information ------------------------------------------- 0.91s
2026-02-15 04:59:17.465484 | orchestrator | Get timestamp for report file ------------------------------------------- 0.82s
2026-02-15 04:59:17.465500 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s
2026-02-15 04:59:17.465513 | orchestrator | Set quorum test data ---------------------------------------------------- 0.50s
2026-02-15 04:59:17.465527 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.49s
2026-02-15 04:59:17.465540 | orchestrator | Set health test data ---------------------------------------------------- 0.34s
2026-02-15 04:59:17.465553 | orchestrator | Prepare test data ------------------------------------------------------- 0.34s
2026-02-15 04:59:17.465566 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.32s
2026-02-15 04:59:17.465578 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.31s
2026-02-15 04:59:17.465591 | orchestrator | Set test result to failed if container is missing ----------------------- 0.31s
2026-02-15 04:59:17.465604 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.31s
2026-02-15 04:59:17.465617 | orchestrator | Aggregate test results step two ----------------------------------------- 0.30s
2026-02-15 04:59:17.465630 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.30s
2026-02-15 04:59:17.465643 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2026-02-15 04:59:17.801835 | orchestrator | + osism validate ceph-mgrs
2026-02-15 04:59:49.508708 | orchestrator |
2026-02-15 04:59:49.508826 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2026-02-15 04:59:49.508843 | orchestrator |
2026-02-15 04:59:49.508856 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-15 04:59:49.508868 | orchestrator | Sunday 15 February 2026 04:59:34 +0000 (0:00:00.443) 0:00:00.443 *******
2026-02-15 04:59:49.508880 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:49.508891 | orchestrator |
2026-02-15 04:59:49.508902 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-15 04:59:49.508913 | orchestrator | Sunday 15 February 2026 04:59:35 +0000 (0:00:00.860) 0:00:01.304 *******
2026-02-15 04:59:49.508924 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:49.508935 | orchestrator |
2026-02-15 04:59:49.508946 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-15 04:59:49.508957 | orchestrator | Sunday 15 February 2026 04:59:36 +0000 (0:00:01.012) 0:00:02.316 *******
2026-02-15 04:59:49.508968 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:49.508979 | orchestrator |
2026-02-15 04:59:49.508990 | orchestrator | TASK [Prepare test data for container existance test] **************************
2026-02-15 04:59:49.509001 | orchestrator | Sunday 15 February 2026 04:59:36 +0000 (0:00:00.126) 0:00:02.443 *******
2026-02-15 04:59:49.509012 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:49.509023 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:59:49.509033 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:59:49.509044 | orchestrator |
2026-02-15 04:59:49.509055 | orchestrator | TASK [Get container info] ******************************************************
2026-02-15 04:59:49.509099 | orchestrator | Sunday 15 February 2026 04:59:37 +0000 (0:00:00.295) 0:00:02.739 *******
2026-02-15 04:59:49.509112 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:59:49.509123 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:59:49.509133 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:49.509144 | orchestrator |
2026-02-15 04:59:49.509155 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2026-02-15 04:59:49.509165 | orchestrator | Sunday 15 February 2026 04:59:38 +0000 (0:00:01.020) 0:00:03.759 *******
2026-02-15 04:59:49.509176 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:49.509187 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:59:49.509198 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:59:49.509209 | orchestrator |
2026-02-15 04:59:49.509239 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2026-02-15 04:59:49.509252 | orchestrator | Sunday 15 February 2026 04:59:38 +0000 (0:00:00.285) 0:00:04.044 *******
2026-02-15 04:59:49.509265 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:49.509278 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:59:49.509290 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:59:49.509303 | orchestrator |
2026-02-15 04:59:49.509315 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-15 04:59:49.509327 | orchestrator | Sunday 15 February 2026 04:59:38 +0000 (0:00:00.492) 0:00:04.537 *******
2026-02-15 04:59:49.509340 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:49.509352 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:59:49.509364 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:59:49.509376 | orchestrator |
2026-02-15 04:59:49.509393 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2026-02-15 04:59:49.509412 | orchestrator | Sunday 15 February 2026 04:59:39 +0000 (0:00:00.310) 0:00:04.848 *******
2026-02-15 04:59:49.509430 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:49.509448 | orchestrator | skipping: [testbed-node-1]
2026-02-15 04:59:49.509491 | orchestrator | skipping: [testbed-node-2]
2026-02-15 04:59:49.509511 | orchestrator |
2026-02-15 04:59:49.509531 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2026-02-15 04:59:49.509552 | orchestrator | Sunday 15 February 2026 04:59:39 +0000 (0:00:00.284) 0:00:05.132 *******
2026-02-15 04:59:49.509571 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:49.509589 | orchestrator | ok: [testbed-node-1]
2026-02-15 04:59:49.509607 | orchestrator | ok: [testbed-node-2]
2026-02-15 04:59:49.509625 | orchestrator |
2026-02-15 04:59:49.509636 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-15 04:59:49.509647 | orchestrator | Sunday 15 February 2026 04:59:39 +0000 (0:00:00.505) 0:00:05.638 *******
2026-02-15 04:59:49.509658 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:49.509669 | orchestrator |
2026-02-15 04:59:49.509680 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-15 04:59:49.509691 | orchestrator | Sunday 15 February 2026 04:59:40 +0000 (0:00:00.266) 0:00:05.904 *******
2026-02-15 04:59:49.509701 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:49.509712 | orchestrator |
2026-02-15 04:59:49.509723 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-15 04:59:49.509734 | orchestrator | Sunday 15 February 2026 04:59:40 +0000 (0:00:00.310) 0:00:06.214 *******
2026-02-15 04:59:49.509745 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:49.509756 | orchestrator |
2026-02-15 04:59:49.509774 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-15 04:59:49.509785 | orchestrator | Sunday 15 February 2026 04:59:40 +0000 (0:00:00.277) 0:00:06.493 *******
2026-02-15 04:59:49.509796 | orchestrator |
2026-02-15 04:59:49.509806 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-15 04:59:49.509817 | orchestrator | Sunday 15 February 2026 04:59:40 +0000 (0:00:00.098) 0:00:06.591 *******
2026-02-15 04:59:49.509828 | orchestrator |
2026-02-15 04:59:49.509839 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-15 04:59:49.509860 | orchestrator | Sunday 15 February 2026 04:59:40 +0000 (0:00:00.084) 0:00:06.675 *******
2026-02-15 04:59:49.509871 | orchestrator |
2026-02-15 04:59:49.509881 | orchestrator | TASK [Print report file information] *******************************************
2026-02-15 04:59:49.509892 | orchestrator | Sunday 15 February 2026 04:59:41 +0000 (0:00:00.076) 0:00:06.752 *******
2026-02-15 04:59:49.509903 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:49.509914 | orchestrator |
2026-02-15 04:59:49.509924 | orchestrator | TASK [Fail due to missing containers] ******************************************
2026-02-15 04:59:49.509935 | orchestrator | Sunday 15 February 2026 04:59:41 +0000 (0:00:00.275) 0:00:07.027 *******
2026-02-15 04:59:49.509946 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:49.509957 | orchestrator |
2026-02-15 04:59:49.509988 | orchestrator | TASK [Define mgr module test vars] *********************************************
2026-02-15 04:59:49.510000 | orchestrator | Sunday 15 February 2026 04:59:41 +0000 (0:00:00.259) 0:00:07.287 *******
2026-02-15 04:59:49.510011 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:49.510067 | orchestrator |
2026-02-15 04:59:49.510079 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2026-02-15 04:59:49.510090 | orchestrator | Sunday 15 February 2026 04:59:41 +0000 (0:00:00.128) 0:00:07.415 *******
2026-02-15 04:59:49.510100 | orchestrator | changed: [testbed-node-0]
2026-02-15 04:59:49.510111 | orchestrator |
2026-02-15 04:59:49.510122 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2026-02-15 04:59:49.510132 | orchestrator | Sunday 15 February 2026 04:59:43 +0000 (0:00:01.990) 0:00:09.406 *******
2026-02-15 04:59:49.510143 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:49.510154 | orchestrator |
2026-02-15 04:59:49.510165 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2026-02-15 04:59:49.510175 | orchestrator | Sunday 15 February 2026 04:59:44 +0000 (0:00:00.491) 0:00:09.897 *******
2026-02-15 04:59:49.510186 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:49.510197 | orchestrator |
2026-02-15 04:59:49.510207 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2026-02-15 04:59:49.510218 | orchestrator | Sunday 15 February 2026 04:59:44 +0000 (0:00:00.319) 0:00:10.216 *******
2026-02-15 04:59:49.510229 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:49.510239 | orchestrator |
2026-02-15 04:59:49.510250 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2026-02-15 04:59:49.510261 | orchestrator | Sunday 15 February 2026 04:59:44 +0000 (0:00:00.137) 0:00:10.354 *******
2026-02-15 04:59:49.510272 | orchestrator | ok: [testbed-node-0]
2026-02-15 04:59:49.510282 | orchestrator |
2026-02-15 04:59:49.510293 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2026-02-15 04:59:49.510304 | orchestrator | Sunday 15 February 2026 04:59:44 +0000 (0:00:00.162) 0:00:10.516 *******
2026-02-15 04:59:49.510314 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:49.510325 | orchestrator |
2026-02-15 04:59:49.510336 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2026-02-15 04:59:49.510346 | orchestrator | Sunday 15 February 2026 04:59:45 +0000 (0:00:00.253) 0:00:10.770 *******
2026-02-15 04:59:49.510357 | orchestrator | skipping: [testbed-node-0]
2026-02-15 04:59:49.510368 | orchestrator |
2026-02-15 04:59:49.510378 | orchestrator | TASK [Aggregate test results step one] *****************************************
2026-02-15 04:59:49.510389 | orchestrator | Sunday 15 February 2026 04:59:45 +0000 (0:00:00.257) 0:00:11.028 *******
2026-02-15 04:59:49.510400 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:49.510411 | orchestrator |
2026-02-15 04:59:49.510422 | orchestrator | TASK [Aggregate test results step two] *****************************************
2026-02-15 04:59:49.510432 | orchestrator | Sunday 15 February 2026 04:59:46 +0000 (0:00:01.379) 0:00:12.408 *******
2026-02-15 04:59:49.510443 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:49.510454 | orchestrator |
2026-02-15 04:59:49.510464 | orchestrator | TASK [Aggregate test results step three] ***************************************
2026-02-15 04:59:49.510522 | orchestrator | Sunday 15 February 2026 04:59:46 +0000 (0:00:00.250) 0:00:12.658 *******
2026-02-15 04:59:49.510542 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:49.510560 | orchestrator |
2026-02-15 04:59:49.510573 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-15 04:59:49.510584 | orchestrator | Sunday 15 February 2026 04:59:47 +0000 (0:00:00.328) 0:00:12.986 *******
2026-02-15 04:59:49.510594 | orchestrator |
2026-02-15 04:59:49.510605 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-15 04:59:49.510622 | orchestrator | Sunday 15 February 2026 04:59:47 +0000 (0:00:00.080) 0:00:13.067 *******
2026-02-15 04:59:49.510640 | orchestrator |
2026-02-15 04:59:49.510657 | orchestrator | TASK [Flush handlers] **********************************************************
2026-02-15 04:59:49.510675 | orchestrator | Sunday 15 February 2026 04:59:47 +0000 (0:00:00.074) 0:00:13.141 *******
2026-02-15 04:59:49.510693 | orchestrator |
2026-02-15 04:59:49.510712 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2026-02-15 04:59:49.510730 | orchestrator | Sunday 15 February 2026 04:59:47 +0000 (0:00:00.253) 0:00:13.395 *******
2026-02-15 04:59:49.510749 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2026-02-15 04:59:49.510765 | orchestrator |
2026-02-15 04:59:49.510776 | orchestrator | TASK [Print report file information] *******************************************
2026-02-15 04:59:49.510787 | orchestrator | Sunday 15 February 2026 04:59:49 +0000 (0:00:01.407) 0:00:14.802 *******
2026-02-15 04:59:49.510797 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2026-02-15 04:59:49.510815 | orchestrator |  "msg": [
2026-02-15 04:59:49.510828 | orchestrator |  "Validator run completed.",
2026-02-15 04:59:49.510839 | orchestrator |  "You can find the report file here:",
2026-02-15 04:59:49.510849 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2026-02-15T04:59:35+00:00-report.json",
2026-02-15 04:59:49.510861 | orchestrator |  "on the following host:",
2026-02-15 04:59:49.510872 | orchestrator |  "testbed-manager"
2026-02-15 04:59:49.510883 | orchestrator |  ]
2026-02-15 04:59:49.510894 | orchestrator | }
2026-02-15 04:59:49.510906 | orchestrator |
2026-02-15 04:59:49.510916 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 04:59:49.510928 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2026-02-15 04:59:49.510940 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 04:59:49.510963 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 04:59:49.839928 | orchestrator |
2026-02-15 04:59:49.840027 | orchestrator |
2026-02-15 04:59:49.840042 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 04:59:49.840055 | orchestrator | Sunday 15 February 2026 04:59:49 +0000 (0:00:00.413) 0:00:15.215 *******
2026-02-15 04:59:49.840066 | orchestrator | ===============================================================================
2026-02-15 04:59:49.840077 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.99s
2026-02-15 04:59:49.840088 | orchestrator | Write report file ------------------------------------------------------- 1.41s
2026-02-15 04:59:49.840098 | orchestrator | Aggregate test results step one ----------------------------------------- 1.38s
2026-02-15 04:59:49.840109 | orchestrator | Get container info ------------------------------------------------------ 1.02s
2026-02-15 04:59:49.840119 | orchestrator | Create report output directory ------------------------------------------ 1.01s
2026-02-15 04:59:49.840130 | orchestrator | Get timestamp for report file ------------------------------------------- 0.86s
2026-02-15 04:59:49.840140 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.51s
2026-02-15 04:59:49.840178 | orchestrator | Set test result to passed if container is existing ---------------------- 0.49s
2026-02-15 04:59:49.840189 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.49s
2026-02-15 04:59:49.840199 | orchestrator | Print report file information ------------------------------------------- 0.41s
2026-02-15 04:59:49.840210 | orchestrator | Flush handlers ---------------------------------------------------------- 0.41s
2026-02-15 04:59:49.840220 | orchestrator | Aggregate test results step three --------------------------------------- 0.33s
2026-02-15 04:59:49.840231 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.32s
2026-02-15 04:59:49.840242 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2026-02-15 04:59:49.840252 | orchestrator | Aggregate test results step two ----------------------------------------- 0.31s
2026-02-15 04:59:49.840263 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2026-02-15 04:59:49.840273 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2026-02-15 04:59:49.840284 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s
2026-02-15 04:59:49.840294 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s
2026-02-15 04:59:49.840305 | orchestrator | Print report file information ------------------------------------------- 0.28s
2026-02-15 04:59:50.147612 | orchestrator | + osism validate ceph-osds
2026-02-15 05:00:12.810306 | orchestrator |
2026-02-15 05:00:12.810384 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2026-02-15 05:00:12.810401 | orchestrator |
2026-02-15 05:00:12.810414 | orchestrator | TASK [Get timestamp for report file] *******************************************
2026-02-15 05:00:12.810427 | orchestrator | Sunday 15 February 2026 05:00:07 +0000 (0:00:00.446) 0:00:00.446 *******
2026-02-15 05:00:12.810440 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-15 05:00:12.810452 | orchestrator |
2026-02-15 05:00:12.810464 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-02-15 05:00:12.810477 | orchestrator | Sunday 15 February 2026 05:00:08 +0000 (0:00:01.869) 0:00:02.315 *******
2026-02-15 05:00:12.810508 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-15 05:00:12.810520 | orchestrator |
2026-02-15 05:00:12.810532 | orchestrator | TASK [Create report output directory] ******************************************
2026-02-15 05:00:12.810544 | orchestrator | Sunday 15 February 2026 05:00:09 +0000 (0:00:00.547) 0:00:02.863 *******
2026-02-15 05:00:12.810557 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-02-15 05:00:12.810583 | orchestrator |
2026-02-15 05:00:12.810605 | orchestrator | TASK [Define report vars] ******************************************************
2026-02-15 05:00:12.810619 | orchestrator | Sunday 15 February 2026 05:00:10 +0000 (0:00:00.778) 0:00:03.641 *******
2026-02-15 05:00:12.810632 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:00:12.810647 | orchestrator |
2026-02-15 05:00:12.810660 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-15 05:00:12.810672 | orchestrator | Sunday 15 February 2026 05:00:10 +0000 (0:00:00.156) 0:00:03.798 *******
2026-02-15 05:00:12.810731 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:00:12.810745 | orchestrator |
2026-02-15 05:00:12.810758 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-15 05:00:12.810772 | orchestrator | Sunday 15 February 2026 05:00:10 +0000 (0:00:00.136) 0:00:03.934 *******
2026-02-15 05:00:12.810785 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:00:12.810798 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:00:12.810811 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:00:12.810824 | orchestrator |
2026-02-15 05:00:12.810836 | orchestrator | TASK [Define OSD test variables] ***********************************************
2026-02-15 05:00:12.810849 | orchestrator | Sunday 15 February 2026 05:00:10 +0000 (0:00:00.343) 0:00:04.278 *******
2026-02-15 05:00:12.810861 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:00:12.810897 | orchestrator |
2026-02-15 05:00:12.810911 | orchestrator | TASK [Calculate OSD devices for each host] *************************************
2026-02-15 05:00:12.810924 | orchestrator | Sunday 15 February 2026 05:00:11 +0000 (0:00:00.150) 0:00:04.428 *******
2026-02-15 05:00:12.810938 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:00:12.810950 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:00:12.810962 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:00:12.810974 | orchestrator |
2026-02-15 05:00:12.810987 | orchestrator | TASK [Calculate total number of OSDs in cluster] *******************************
2026-02-15 05:00:12.810999 | orchestrator | Sunday 15 February 2026 05:00:11 +0000 (0:00:00.336) 0:00:04.764 *******
2026-02-15 05:00:12.811012 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:00:12.811025 | orchestrator |
2026-02-15 05:00:12.811038 | orchestrator | TASK [Prepare test data] *******************************************************
2026-02-15 05:00:12.811050 | orchestrator | Sunday 15 February 2026 05:00:12 +0000 (0:00:00.851) 0:00:05.615 *******
2026-02-15 05:00:12.811062 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:00:12.811072 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:00:12.811080 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:00:12.811088 | orchestrator |
2026-02-15 05:00:12.811097 | orchestrator | TASK [Get list of ceph-osd containers on host] *********************************
2026-02-15 05:00:12.811106 | orchestrator | Sunday 15 February 2026 05:00:12 +0000 (0:00:00.318) 0:00:05.934 *******
2026-02-15 05:00:12.811116 | orchestrator | skipping: [testbed-node-3] => (item={'id': '749b7a90730d7212bae283ea4a8aaee67da95496f2c9cc497eb5a6c8ac6e4a55', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})
2026-02-15 05:00:12.811126 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'aecc8ddf93faa3b538ee96434519988ad73c83a3c21e436fc79d464f490d6308', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-15 05:00:12.811137 | orchestrator | skipping: [testbed-node-3] => (item={'id': '84786a3799399b93cc00a6a03a6b71e506d16788b2f9ea632354cf990ff7a4dd', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})
2026-02-15 05:00:12.811149 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f4111164123f0ed8ff14b50049458b7bb255e70068f8f11f1706f06882e5bf4d', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})
2026-02-15 05:00:12.811162 | orchestrator | skipping: [testbed-node-3] => (item={'id': '37eed1882b3647b18b2ebe71b08b84daca64b19c8a52534c62e53a1844cf49c0', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})
2026-02-15 05:00:12.811194 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6310b682dab35b99f3fa8ba02fe9ce0d483076b7650c51e50c2efdd8804c06cd', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-15 05:00:12.811209 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6b11ca8890966ded7cf4143b23e71726c4854acefc8d2a12f975bfa4ddd82066', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})
2026-02-15 05:00:12.811222 | orchestrator | skipping: [testbed-node-3] => (item={'id': '827ad0ec207f767626a9dc8d2a95c8aede82e210a6a019110e6cf2b08996af40', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})
2026-02-15 05:00:12.811235 | orchestrator | skipping: [testbed-node-3] => (item={'id': '79cc533261b95b6ed9b452afe817b96f0e467b702b9e9b87cc0af3260116f29a', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-15 05:00:12.811298 | orchestrator | skipping: [testbed-node-3] => (item={'id': '385556870d29e57c527c485fd5ad970e0a5b5a66ee62fcfe025535b8a711fad0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-15 05:00:12.811316 | orchestrator | skipping: [testbed-node-3] => (item={'id': '84e481acc10954093037a7f8be159e8c832a03fff1fbc59ba5b497100bb8388e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-15 05:00:12.811325 | orchestrator | ok: [testbed-node-3] => (item={'id': '164252bf503b6d6283584e6271a14705d02f1d7f98ff657eaf97b15cb8224323', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up About an hour'})
2026-02-15 05:00:12.811332 | orchestrator | ok: [testbed-node-3] => (item={'id': '4ffc7581d52cc9a6f699f0bb78b2c8722fdc49dc21b5a2e43ed62358a2faba8e', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up About an hour'})
2026-02-15 05:00:12.811340 | orchestrator | skipping: [testbed-node-3] => (item={'id':
'e393dcb65ffa7464bb5deed7ae73ddfc168e1c4561dec0602778c1ae1d7f7071', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-15 05:00:12.811347 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5bfbd005b88a58e6728fff4df81dc175ee0447a7dea7bb40aa7d923cc226096c', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-15 05:00:12.811354 | orchestrator | skipping: [testbed-node-3] => (item={'id': '03da5c965fbef1a4beda738ee2b57b40d6f95ae6fb2436984b2637bda0df37a0', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-15 05:00:12.811362 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c89e7e2da4ec0191e0d04d2599476385bde320e020916eb1fbae9aca6e3c155e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-15 05:00:12.811369 | orchestrator | skipping: [testbed-node-3] => (item={'id': '9c98284be2d984f52380f34e964d1f87781ae8757b7776024ba78320b31c24d1', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-15 05:00:12.811377 | orchestrator | skipping: [testbed-node-3] => (item={'id': '84530628b731cf5bff44056239d3400608c835a9b9b39f4096232e5c38599267', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-15 05:00:12.811385 | orchestrator | skipping: [testbed-node-4] => (item={'id': '858d1492a4cf413e684b7d3e05a4147ce445998f47eb807614efa3a84effda3d', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-15 05:00:12.811399 | orchestrator | skipping: [testbed-node-4] => (item={'id': '868b8517c483de6dcedcc75fdcdc3d03d74a9f113946982fc6737356912c57ac', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-15 05:00:13.041539 | orchestrator | skipping: [testbed-node-4] => (item={'id': '022a09e5635b42343b36e97417cd563bd7ebf709a519ab34025c380f93a9fa11', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-15 05:00:13.041639 | orchestrator | skipping: [testbed-node-4] => (item={'id': '2395254000eb11331dfece8ea16bcd2ae2fa67ceae94e7d8c87fc9d537098ea5', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-15 05:00:13.041653 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9e89efd2cbdea0867064a9342f573bb4b6a64af559cf12200fd510cb6a9e52c0', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-15 05:00:13.041664 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e9f0b0f6773d72587c72a34ad165df5064b0bfc52aafb9538daf2d213192c080', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-15 05:00:13.041686 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8706e84e80678db1819d5e2927192758360a385e191dcba9a582dd67cdc0c2db', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-15 05:00:13.041697 | orchestrator | skipping: [testbed-node-4] => (item={'id': '872467fa4ea691033ad360a27f78ce952b8303478d71d395dcdef46a1c4a34d7', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2026-02-15 05:00:13.041707 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9301cad5b536382bdfebdc225bb4a22faefa5517923ba70eeb227e4183b87e24', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-15 05:00:13.041717 | orchestrator | skipping: [testbed-node-4] => (item={'id': '04d17962114c2a75966dd85b5423bef13e453336ae2622fe48206e1b46636681', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-15 05:00:13.041727 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a5e2257fbe50b6b04b57eea7c15588bd6d811ba7870f8667a4103775029345a3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up About an hour'})  2026-02-15 05:00:13.041738 | orchestrator | ok: [testbed-node-4] => (item={'id': '36a7dd972617990516fc80e89c0e0096d6e2edfc133d96986889bac46b3edf05', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-15 05:00:13.041749 | orchestrator | ok: [testbed-node-4] => (item={'id': '3a58cb4e8cafaa3e151ccf2d3a847f96296e5c9d526009aacdf4692a66120275', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-15 05:00:13.041759 | orchestrator | skipping: [testbed-node-4] => (item={'id': 
'6d2307f93bb7c78ffc961b4dda9b3af3000deb6e4d3484509092c48f40aca7a6', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-15 05:00:13.041768 | orchestrator | skipping: [testbed-node-4] => (item={'id': '4e908595a0e460851aba6e07bba4e3397de60da045af08edc47263ec243f4624', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-15 05:00:13.041778 | orchestrator | skipping: [testbed-node-4] => (item={'id': '014f2d055eeb7d8937dcb36ee3127db6373dab0d431c04d819154ec0b2ae6045', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-15 05:00:13.041802 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'c4b6c67efdaeaed7e66dd1d54765a4004535a033d49eb4635d40f96d6c51abff', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-15 05:00:13.041820 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'f57dc2c2fc5bf95d8896c7986896cc01cea7343307a03ef0f28474a356707986', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-15 05:00:13.041830 | orchestrator | skipping: [testbed-node-4] => (item={'id': '74034b9851488526821c7edefb32c70e9ae7661d11355b205bbda35e1db03c29', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-15 05:00:13.041840 | orchestrator | skipping: [testbed-node-5] => (item={'id': '76a5a5952ac7de73a98afd46320a7da5ead3cffcd6a2097b91197155cbdaa34e', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:2.2.0.20251130', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 9 minutes'})  2026-02-15 05:00:13.041854 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ed705dcbc5b8ee84d71669698798fc80318d6d19087547203bd2c604fefc39a9', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20251130', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-15 05:00:13.041864 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8cd5a43226690611bbbfb263b1ae814407900d8a9d52216ede6d2f85657d04bd', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20251130', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 10 minutes'})  2026-02-15 05:00:13.041874 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8d88dbf7a6bec71d5f629b5f4ae9bb0897182b6295652a18399c7bee4d9dd74c', 'image': 'registry.osism.tech/kolla/release/ceilometer-compute:23.0.2.20251130', 'name': '/ceilometer_compute', 'state': 'running', 'status': 'Up 20 minutes (unhealthy)'})  2026-02-15 05:00:13.041884 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2459e38c3ea5056e8a6ecbee2e938eab4c6df75ce0b79dda1268e1f95cddba7a', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.2.1.20251130', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 41 minutes (healthy)'})  2026-02-15 05:00:13.041894 | orchestrator | skipping: [testbed-node-5] => (item={'id': '226c4ec5ddc8823a7c767fafe2baaaaad1fd166a91c890c2cbafd88ea2f52dda', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:10.0.0.20251130', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-15 05:00:13.041904 | orchestrator | skipping: [testbed-node-5] => (item={'id': '2766f70706be1aae600e20bc6eda6377974e64f59552281a855d44095be7356c', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.2.1.20251130', 
'name': '/nova_ssh', 'state': 'running', 'status': 'Up 42 minutes (healthy)'})  2026-02-15 05:00:13.041913 | orchestrator | skipping: [testbed-node-5] => (item={'id': '9dca83f94fa3c94a29c369155bb1a983dcb114fbd645de4106cb33bcc9f8c425', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.2.20251130', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 49 minutes (healthy)'})  2026-02-15 05:00:13.041923 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7ddb2609e1086e8a4092397d5ef526a591315569f562219adde805323f4e5e96', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up About an hour'})  2026-02-15 05:00:13.041933 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'b39cae37dd42877e8f7c08b54a68cc0e22cb61000b2f57bbd8eed655d5cfe191', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-15 05:00:13.041943 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ac117d72dfd330026301daabee85e74a3d14fb48b04e66a4ca9b5446422da6b5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up About an hour'})  2026-02-15 05:00:13.041961 | orchestrator | ok: [testbed-node-5] => (item={'id': 'a5fd993ccdc08a00a020eaeaddbdf91bd44f154865d109032ed00dae579129e9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-15 05:00:13.041977 | orchestrator | ok: [testbed-node-5] => (item={'id': '09a365119ab1332176a65fb3860ae6d0d6e5c118bf444f750b3399571de12e6f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up About an hour'}) 2026-02-15 05:00:24.491762 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'ed18d3c1dc8c12dd06dae02a9888b46b93517b5a2b3c112b132c910c5c72c320', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.3.20251130', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up About an hour'})  2026-02-15 05:00:24.491938 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'bda5342f8066ed589bab2a780e5c346dbeee3eee2390ac8b72b3f9fec3cdd3cd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.3.20251130', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-15 05:00:24.491958 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c66b088ad94197d46e7c006c214667d093c93f02f3d09faf3a737f43af71eee5', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.3.20251130', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up About an hour (healthy)'})  2026-02-15 05:00:24.491973 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd0fa1ae8919e64dc1b87c65b38d536a60603267504c275d60e9bbd9e8fdcd171', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251130', 'name': '/cron', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-15 05:00:24.491987 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3a6e69186862ac12930194ab650f7c01fcb8ae1572380e11c04603f711481329', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.7.1.20251130', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-15 05:00:24.492045 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7a2543c4bebc530152c5bba9cd3785e044592ce48d76e8338861b3808af28f0a', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251130', 'name': '/fluentd', 'state': 'running', 'status': 'Up 2 hours'})  2026-02-15 05:00:24.492060 | orchestrator | 2026-02-15 05:00:24.492074 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2026-02-15 05:00:24.492088 | orchestrator | Sunday 15 February 2026 
05:00:13 +0000 (0:00:00.476) 0:00:06.410 ******* 2026-02-15 05:00:24.492099 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:24.492112 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:24.492123 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:24.492134 | orchestrator | 2026-02-15 05:00:24.492146 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2026-02-15 05:00:24.492157 | orchestrator | Sunday 15 February 2026 05:00:13 +0000 (0:00:00.298) 0:00:06.708 ******* 2026-02-15 05:00:24.492168 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:24.492180 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:00:24.492191 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:00:24.492202 | orchestrator | 2026-02-15 05:00:24.492213 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2026-02-15 05:00:24.492224 | orchestrator | Sunday 15 February 2026 05:00:13 +0000 (0:00:00.398) 0:00:07.107 ******* 2026-02-15 05:00:24.492236 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:24.492249 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:24.492262 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:24.492275 | orchestrator | 2026-02-15 05:00:24.492287 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-15 05:00:24.492300 | orchestrator | Sunday 15 February 2026 05:00:14 +0000 (0:00:00.315) 0:00:07.423 ******* 2026-02-15 05:00:24.492313 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:24.492355 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:24.492368 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:24.492381 | orchestrator | 2026-02-15 05:00:24.492394 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2026-02-15 05:00:24.492407 | orchestrator | Sunday 15 February 2026 05:00:14 +0000 (0:00:00.288) 0:00:07.712 ******* 
2026-02-15 05:00:24.492420 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2026-02-15 05:00:24.492434 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2026-02-15 05:00:24.492447 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:24.492460 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2026-02-15 05:00:24.492501 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2026-02-15 05:00:24.492513 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:00:24.492526 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2026-02-15 05:00:24.492539 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2026-02-15 05:00:24.492552 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:00:24.492565 | orchestrator | 2026-02-15 05:00:24.492578 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2026-02-15 05:00:24.492591 | orchestrator | Sunday 15 February 2026 05:00:14 +0000 (0:00:00.350) 0:00:08.062 ******* 2026-02-15 05:00:24.492602 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:24.492613 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:24.492624 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:24.492635 | orchestrator | 2026-02-15 05:00:24.492647 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-15 05:00:24.492658 | orchestrator | Sunday 15 February 2026 05:00:15 +0000 (0:00:00.434) 0:00:08.497 ******* 2026-02-15 05:00:24.492669 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:24.492701 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:00:24.492714 | 
orchestrator | skipping: [testbed-node-5] 2026-02-15 05:00:24.492724 | orchestrator | 2026-02-15 05:00:24.492759 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2026-02-15 05:00:24.492771 | orchestrator | Sunday 15 February 2026 05:00:15 +0000 (0:00:00.287) 0:00:08.785 ******* 2026-02-15 05:00:24.492782 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:24.492793 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:00:24.492805 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:00:24.492816 | orchestrator | 2026-02-15 05:00:24.492827 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2026-02-15 05:00:24.492838 | orchestrator | Sunday 15 February 2026 05:00:15 +0000 (0:00:00.278) 0:00:09.063 ******* 2026-02-15 05:00:24.492849 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:24.492859 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:24.492870 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:24.492881 | orchestrator | 2026-02-15 05:00:24.492892 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-15 05:00:24.492903 | orchestrator | Sunday 15 February 2026 05:00:16 +0000 (0:00:00.345) 0:00:09.408 ******* 2026-02-15 05:00:24.492914 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:24.492925 | orchestrator | 2026-02-15 05:00:24.492936 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-15 05:00:24.492947 | orchestrator | Sunday 15 February 2026 05:00:16 +0000 (0:00:00.761) 0:00:10.170 ******* 2026-02-15 05:00:24.492957 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:24.492968 | orchestrator | 2026-02-15 05:00:24.492984 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-15 05:00:24.492996 | orchestrator | Sunday 15 February 2026 05:00:17 +0000 
(0:00:00.256) 0:00:10.427 ******* 2026-02-15 05:00:24.493007 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:24.493032 | orchestrator | 2026-02-15 05:00:24.493043 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-15 05:00:24.493054 | orchestrator | Sunday 15 February 2026 05:00:17 +0000 (0:00:00.280) 0:00:10.708 ******* 2026-02-15 05:00:24.493065 | orchestrator | 2026-02-15 05:00:24.493075 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-15 05:00:24.493086 | orchestrator | Sunday 15 February 2026 05:00:17 +0000 (0:00:00.072) 0:00:10.780 ******* 2026-02-15 05:00:24.493097 | orchestrator | 2026-02-15 05:00:24.493108 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-15 05:00:24.493119 | orchestrator | Sunday 15 February 2026 05:00:17 +0000 (0:00:00.070) 0:00:10.850 ******* 2026-02-15 05:00:24.493130 | orchestrator | 2026-02-15 05:00:24.493140 | orchestrator | TASK [Print report file information] ******************************************* 2026-02-15 05:00:24.493151 | orchestrator | Sunday 15 February 2026 05:00:17 +0000 (0:00:00.075) 0:00:10.925 ******* 2026-02-15 05:00:24.493162 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:24.493173 | orchestrator | 2026-02-15 05:00:24.493184 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2026-02-15 05:00:24.493194 | orchestrator | Sunday 15 February 2026 05:00:17 +0000 (0:00:00.260) 0:00:11.186 ******* 2026-02-15 05:00:24.493205 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:24.493216 | orchestrator | 2026-02-15 05:00:24.493227 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-15 05:00:24.493238 | orchestrator | Sunday 15 February 2026 05:00:18 +0000 (0:00:00.267) 0:00:11.453 ******* 2026-02-15 05:00:24.493249 | 
orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:24.493260 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:24.493271 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:24.493281 | orchestrator | 2026-02-15 05:00:24.493292 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2026-02-15 05:00:24.493304 | orchestrator | Sunday 15 February 2026 05:00:18 +0000 (0:00:00.287) 0:00:11.740 ******* 2026-02-15 05:00:24.493314 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:24.493325 | orchestrator | 2026-02-15 05:00:24.493336 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2026-02-15 05:00:24.493347 | orchestrator | Sunday 15 February 2026 05:00:19 +0000 (0:00:00.774) 0:00:12.514 ******* 2026-02-15 05:00:24.493358 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 05:00:24.493369 | orchestrator | 2026-02-15 05:00:24.493380 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2026-02-15 05:00:24.493391 | orchestrator | Sunday 15 February 2026 05:00:20 +0000 (0:00:01.593) 0:00:14.108 ******* 2026-02-15 05:00:24.493401 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:24.493412 | orchestrator | 2026-02-15 05:00:24.493423 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2026-02-15 05:00:24.493434 | orchestrator | Sunday 15 February 2026 05:00:20 +0000 (0:00:00.162) 0:00:14.270 ******* 2026-02-15 05:00:24.493445 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:24.493456 | orchestrator | 2026-02-15 05:00:24.493495 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2026-02-15 05:00:24.493507 | orchestrator | Sunday 15 February 2026 05:00:21 +0000 (0:00:00.330) 0:00:14.600 ******* 2026-02-15 05:00:24.493518 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:24.493529 | 
orchestrator | 2026-02-15 05:00:24.493540 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2026-02-15 05:00:24.493551 | orchestrator | Sunday 15 February 2026 05:00:21 +0000 (0:00:00.131) 0:00:14.732 ******* 2026-02-15 05:00:24.493562 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:24.493573 | orchestrator | 2026-02-15 05:00:24.493584 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-15 05:00:24.493595 | orchestrator | Sunday 15 February 2026 05:00:21 +0000 (0:00:00.184) 0:00:14.917 ******* 2026-02-15 05:00:24.493606 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:24.493616 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:24.493636 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:24.493647 | orchestrator | 2026-02-15 05:00:24.493658 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2026-02-15 05:00:24.493669 | orchestrator | Sunday 15 February 2026 05:00:21 +0000 (0:00:00.308) 0:00:15.225 ******* 2026-02-15 05:00:24.493680 | orchestrator | changed: [testbed-node-3] 2026-02-15 05:00:24.493691 | orchestrator | changed: [testbed-node-4] 2026-02-15 05:00:24.493702 | orchestrator | changed: [testbed-node-5] 2026-02-15 05:00:35.142885 | orchestrator | 2026-02-15 05:00:35.143018 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2026-02-15 05:00:35.143034 | orchestrator | Sunday 15 February 2026 05:00:24 +0000 (0:00:02.627) 0:00:17.853 ******* 2026-02-15 05:00:35.143044 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:35.143054 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:35.143063 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:35.143072 | orchestrator | 2026-02-15 05:00:35.143081 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2026-02-15 05:00:35.143090 | orchestrator | Sunday 
15 February 2026 05:00:24 +0000 (0:00:00.380) 0:00:18.233 ******* 2026-02-15 05:00:35.143099 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:35.143108 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:35.143117 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:35.143126 | orchestrator | 2026-02-15 05:00:35.143134 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2026-02-15 05:00:35.143143 | orchestrator | Sunday 15 February 2026 05:00:25 +0000 (0:00:00.571) 0:00:18.805 ******* 2026-02-15 05:00:35.143153 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:35.143162 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:00:35.143171 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:00:35.143180 | orchestrator | 2026-02-15 05:00:35.143188 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2026-02-15 05:00:35.143197 | orchestrator | Sunday 15 February 2026 05:00:25 +0000 (0:00:00.340) 0:00:19.146 ******* 2026-02-15 05:00:35.143206 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:35.143215 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:35.143241 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:35.143250 | orchestrator | 2026-02-15 05:00:35.143259 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2026-02-15 05:00:35.143268 | orchestrator | Sunday 15 February 2026 05:00:26 +0000 (0:00:00.599) 0:00:19.746 ******* 2026-02-15 05:00:35.143277 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:35.143285 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:00:35.143294 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:00:35.143303 | orchestrator | 2026-02-15 05:00:35.143312 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2026-02-15 05:00:35.143321 | orchestrator | Sunday 15 February 2026 05:00:26 +0000 
(0:00:00.327) 0:00:20.073 ******* 2026-02-15 05:00:35.143330 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:35.143339 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:00:35.143347 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:00:35.143356 | orchestrator | 2026-02-15 05:00:35.143365 | orchestrator | TASK [Prepare test data] ******************************************************* 2026-02-15 05:00:35.143374 | orchestrator | Sunday 15 February 2026 05:00:26 +0000 (0:00:00.301) 0:00:20.374 ******* 2026-02-15 05:00:35.143383 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:35.143391 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:35.143401 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:35.143442 | orchestrator | 2026-02-15 05:00:35.143454 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2026-02-15 05:00:35.143465 | orchestrator | Sunday 15 February 2026 05:00:27 +0000 (0:00:00.503) 0:00:20.878 ******* 2026-02-15 05:00:35.143475 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:35.143484 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:35.143494 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:35.143508 | orchestrator | 2026-02-15 05:00:35.143551 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2026-02-15 05:00:35.143568 | orchestrator | Sunday 15 February 2026 05:00:28 +0000 (0:00:00.771) 0:00:21.650 ******* 2026-02-15 05:00:35.143583 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:35.143597 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:35.143612 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:35.143627 | orchestrator | 2026-02-15 05:00:35.143644 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2026-02-15 05:00:35.143659 | orchestrator | Sunday 15 February 2026 05:00:28 +0000 (0:00:00.339) 0:00:21.990 ******* 2026-02-15 
05:00:35.143675 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:35.143690 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:00:35.143705 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:00:35.143721 | orchestrator | 2026-02-15 05:00:35.143736 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2026-02-15 05:00:35.143751 | orchestrator | Sunday 15 February 2026 05:00:28 +0000 (0:00:00.355) 0:00:22.345 ******* 2026-02-15 05:00:35.143767 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:00:35.143781 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:00:35.143795 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:00:35.143810 | orchestrator | 2026-02-15 05:00:35.143826 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2026-02-15 05:00:35.143840 | orchestrator | Sunday 15 February 2026 05:00:29 +0000 (0:00:00.544) 0:00:22.890 ******* 2026-02-15 05:00:35.143855 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-15 05:00:35.143865 | orchestrator | 2026-02-15 05:00:35.143874 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2026-02-15 05:00:35.143882 | orchestrator | Sunday 15 February 2026 05:00:29 +0000 (0:00:00.269) 0:00:23.159 ******* 2026-02-15 05:00:35.143891 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:00:35.143900 | orchestrator | 2026-02-15 05:00:35.143908 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2026-02-15 05:00:35.143917 | orchestrator | Sunday 15 February 2026 05:00:30 +0000 (0:00:00.265) 0:00:23.424 ******* 2026-02-15 05:00:35.143926 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-15 05:00:35.143934 | orchestrator | 2026-02-15 05:00:35.143943 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2026-02-15 
05:00:35.143952 | orchestrator | Sunday 15 February 2026 05:00:31 +0000 (0:00:01.793) 0:00:25.218 ******* 2026-02-15 05:00:35.143961 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-15 05:00:35.143970 | orchestrator | 2026-02-15 05:00:35.143979 | orchestrator | TASK [Aggregate test results step three] *************************************** 2026-02-15 05:00:35.143988 | orchestrator | Sunday 15 February 2026 05:00:32 +0000 (0:00:00.285) 0:00:25.503 ******* 2026-02-15 05:00:35.143997 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-15 05:00:35.144005 | orchestrator | 2026-02-15 05:00:35.144034 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-15 05:00:35.144044 | orchestrator | Sunday 15 February 2026 05:00:32 +0000 (0:00:00.276) 0:00:25.779 ******* 2026-02-15 05:00:35.144052 | orchestrator | 2026-02-15 05:00:35.144061 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-15 05:00:35.144070 | orchestrator | Sunday 15 February 2026 05:00:32 +0000 (0:00:00.070) 0:00:25.849 ******* 2026-02-15 05:00:35.144079 | orchestrator | 2026-02-15 05:00:35.144088 | orchestrator | TASK [Flush handlers] ********************************************************** 2026-02-15 05:00:35.144096 | orchestrator | Sunday 15 February 2026 05:00:32 +0000 (0:00:00.068) 0:00:25.918 ******* 2026-02-15 05:00:35.144105 | orchestrator | 2026-02-15 05:00:35.144114 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2026-02-15 05:00:35.144123 | orchestrator | Sunday 15 February 2026 05:00:32 +0000 (0:00:00.072) 0:00:25.991 ******* 2026-02-15 05:00:35.144131 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-02-15 05:00:35.144152 | orchestrator | 2026-02-15 05:00:35.144161 | orchestrator | TASK [Print report file information] 
******************************************* 2026-02-15 05:00:35.144170 | orchestrator | Sunday 15 February 2026 05:00:34 +0000 (0:00:01.593) 0:00:27.584 ******* 2026-02-15 05:00:35.144178 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2026-02-15 05:00:35.144187 | orchestrator |  "msg": [ 2026-02-15 05:00:35.144197 | orchestrator |  "Validator run completed.", 2026-02-15 05:00:35.144206 | orchestrator |  "You can find the report file here:", 2026-02-15 05:00:35.144222 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2026-02-15T05:00:07+00:00-report.json", 2026-02-15 05:00:35.144232 | orchestrator |  "on the following host:", 2026-02-15 05:00:35.144241 | orchestrator |  "testbed-manager" 2026-02-15 05:00:35.144251 | orchestrator |  ] 2026-02-15 05:00:35.144260 | orchestrator | } 2026-02-15 05:00:35.144269 | orchestrator | 2026-02-15 05:00:35.144278 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:00:35.144288 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-02-15 05:00:35.144298 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-15 05:00:35.144307 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-02-15 05:00:35.144316 | orchestrator | 2026-02-15 05:00:35.144325 | orchestrator | 2026-02-15 05:00:35.144334 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:00:35.144343 | orchestrator | Sunday 15 February 2026 05:00:34 +0000 (0:00:00.609) 0:00:28.194 ******* 2026-02-15 05:00:35.144352 | orchestrator | =============================================================================== 2026-02-15 05:00:35.144360 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.63s 2026-02-15 
05:00:35.144369 | orchestrator | Get timestamp for report file ------------------------------------------- 1.87s 2026-02-15 05:00:35.144378 | orchestrator | Aggregate test results step one ----------------------------------------- 1.79s 2026-02-15 05:00:35.144387 | orchestrator | Write report file ------------------------------------------------------- 1.59s 2026-02-15 05:00:35.144395 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.59s 2026-02-15 05:00:35.144404 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.85s 2026-02-15 05:00:35.144436 | orchestrator | Create report output directory ------------------------------------------ 0.78s 2026-02-15 05:00:35.144446 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.77s 2026-02-15 05:00:35.144454 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.77s 2026-02-15 05:00:35.144463 | orchestrator | Aggregate test results step one ----------------------------------------- 0.76s 2026-02-15 05:00:35.144472 | orchestrator | Print report file information ------------------------------------------- 0.61s 2026-02-15 05:00:35.144480 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.60s 2026-02-15 05:00:35.144489 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.57s 2026-02-15 05:00:35.144498 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.55s 2026-02-15 05:00:35.144506 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.54s 2026-02-15 05:00:35.144515 | orchestrator | Prepare test data ------------------------------------------------------- 0.50s 2026-02-15 05:00:35.144524 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.48s 2026-02-15 05:00:35.144532 
| orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.43s 2026-02-15 05:00:35.144541 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.40s 2026-02-15 05:00:35.144557 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.38s 2026-02-15 05:00:35.470213 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2026-02-15 05:00:35.480115 | orchestrator | + set -e 2026-02-15 05:00:35.480345 | orchestrator | + source /opt/manager-vars.sh 2026-02-15 05:00:35.480372 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-15 05:00:35.480392 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-15 05:00:35.480491 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-15 05:00:35.480514 | orchestrator | ++ CEPH_VERSION=reef 2026-02-15 05:00:35.480533 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-15 05:00:35.480552 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-15 05:00:35.480570 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-15 05:00:35.480589 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-15 05:00:35.480607 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-15 05:00:35.480624 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-15 05:00:35.480644 | orchestrator | ++ export ARA=false 2026-02-15 05:00:35.480663 | orchestrator | ++ ARA=false 2026-02-15 05:00:35.480680 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-15 05:00:35.480699 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-15 05:00:35.480716 | orchestrator | ++ export TEMPEST=false 2026-02-15 05:00:35.480732 | orchestrator | ++ TEMPEST=false 2026-02-15 05:00:35.480748 | orchestrator | ++ export IS_ZUUL=true 2026-02-15 05:00:35.480764 | orchestrator | ++ IS_ZUUL=true 2026-02-15 05:00:35.480780 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145 2026-02-15 05:00:35.480797 | orchestrator | ++ 
MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145 2026-02-15 05:00:35.480815 | orchestrator | ++ export EXTERNAL_API=false 2026-02-15 05:00:35.480834 | orchestrator | ++ EXTERNAL_API=false 2026-02-15 05:00:35.480853 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-15 05:00:35.480872 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-15 05:00:35.480891 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-15 05:00:35.480910 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-15 05:00:35.480929 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-15 05:00:35.480949 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-15 05:00:35.480968 | orchestrator | + [[ -e /etc/redhat-release ]] 2026-02-15 05:00:35.480987 | orchestrator | + source /etc/os-release 2026-02-15 05:00:35.481004 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.4 LTS' 2026-02-15 05:00:35.481023 | orchestrator | ++ NAME=Ubuntu 2026-02-15 05:00:35.481041 | orchestrator | ++ VERSION_ID=24.04 2026-02-15 05:00:35.481061 | orchestrator | ++ VERSION='24.04.4 LTS (Noble Numbat)' 2026-02-15 05:00:35.481082 | orchestrator | ++ VERSION_CODENAME=noble 2026-02-15 05:00:35.481102 | orchestrator | ++ ID=ubuntu 2026-02-15 05:00:35.481124 | orchestrator | ++ ID_LIKE=debian 2026-02-15 05:00:35.481145 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2026-02-15 05:00:35.481165 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2026-02-15 05:00:35.481186 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2026-02-15 05:00:35.481207 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2026-02-15 05:00:35.481228 | orchestrator | ++ UBUNTU_CODENAME=noble 2026-02-15 05:00:35.481250 | orchestrator | ++ LOGO=ubuntu-logo 2026-02-15 05:00:35.481269 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2026-02-15 05:00:35.481289 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2026-02-15 
05:00:35.481350 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-15 05:00:35.507603 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2026-02-15 05:00:58.289785 | orchestrator | 2026-02-15 05:00:58.289915 | orchestrator | # Status of Elasticsearch 2026-02-15 05:00:58.289933 | orchestrator | 2026-02-15 05:00:58.289945 | orchestrator | + pushd /opt/configuration/contrib 2026-02-15 05:00:58.289958 | orchestrator | + echo 2026-02-15 05:00:58.289970 | orchestrator | + echo '# Status of Elasticsearch' 2026-02-15 05:00:58.289981 | orchestrator | + echo 2026-02-15 05:00:58.289992 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2026-02-15 05:00:58.503133 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2026-02-15 05:00:58.504056 | orchestrator | 2026-02-15 05:00:58.504147 | orchestrator | # Status of MariaDB 2026-02-15 05:00:58.504347 | orchestrator | + echo 2026-02-15 05:00:58.504374 | orchestrator | + echo '# Status of MariaDB' 2026-02-15 05:00:58.504394 | orchestrator | + echo 2026-02-15 05:00:58.504426 | orchestrator | 2026-02-15 05:00:58.505322 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-15 05:00:58.564815 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-15 05:00:58.564918 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-15 05:00:58.564938 | orchestrator | + MARIADB_USER=root_shard_0 2026-02-15 05:00:58.564953 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2026-02-15 05:00:58.620712 
| orchestrator | Reading package lists... 2026-02-15 05:00:58.960738 | orchestrator | Building dependency tree... 2026-02-15 05:00:58.961643 | orchestrator | Reading state information... 2026-02-15 05:00:59.386950 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2026-02-15 05:00:59.387049 | orchestrator | bc set to manually installed. 2026-02-15 05:00:59.387066 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 2026-02-15 05:01:00.063694 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2026-02-15 05:01:00.064063 | orchestrator | + echo 2026-02-15 05:01:00.064607 | orchestrator | 2026-02-15 05:01:00.064654 | orchestrator | # Status of Prometheus 2026-02-15 05:01:00.064673 | orchestrator | 2026-02-15 05:01:00.064689 | orchestrator | + echo '# Status of Prometheus' 2026-02-15 05:01:00.064706 | orchestrator | + echo 2026-02-15 05:01:00.064723 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2026-02-15 05:01:00.131361 | orchestrator | Unauthorized 2026-02-15 05:01:00.134596 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2026-02-15 05:01:00.216766 | orchestrator | Unauthorized 2026-02-15 05:01:00.220477 | orchestrator | 2026-02-15 05:01:00.220549 | orchestrator | # Status of RabbitMQ 2026-02-15 05:01:00.220572 | orchestrator | 2026-02-15 05:01:00.220592 | orchestrator | + echo 2026-02-15 05:01:00.220604 | orchestrator | + echo '# Status of RabbitMQ' 2026-02-15 05:01:00.220615 | orchestrator | + echo 2026-02-15 05:01:00.221249 | orchestrator | ++ semver 9.5.0 10.0.0-0 2026-02-15 05:01:00.281943 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-15 05:01:00.282085 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-15 05:01:00.282105 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2026-02-15 05:01:00.760368 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node 
OK (3) nb_running_ram_node OK (0) 2026-02-15 05:01:00.770637 | orchestrator | 2026-02-15 05:01:00.770722 | orchestrator | # Status of Redis 2026-02-15 05:01:00.770737 | orchestrator | 2026-02-15 05:01:00.770750 | orchestrator | + echo 2026-02-15 05:01:00.770761 | orchestrator | + echo '# Status of Redis' 2026-02-15 05:01:00.770773 | orchestrator | + echo 2026-02-15 05:01:00.770785 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2026-02-15 05:01:00.775692 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001945s;;;0.000000;10.000000 2026-02-15 05:01:00.776031 | orchestrator | 2026-02-15 05:01:00.776059 | orchestrator | # Create backup of MariaDB database 2026-02-15 05:01:00.776073 | orchestrator | + popd 2026-02-15 05:01:00.776085 | orchestrator | + echo 2026-02-15 05:01:00.776097 | orchestrator | + echo '# Create backup of MariaDB database' 2026-02-15 05:01:00.776109 | orchestrator | + echo 2026-02-15 05:01:00.776120 | orchestrator | 2026-02-15 05:01:00.776137 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2026-02-15 05:01:02.811642 | orchestrator | 2026-02-15 05:01:02 | INFO  | Task 24d33022-a0f2-4358-81ad-1a36fa2775bd (mariadb_backup) was prepared for execution. 2026-02-15 05:01:02.811743 | orchestrator | 2026-02-15 05:01:02 | INFO  | It takes a moment until task 24d33022-a0f2-4358-81ad-1a36fa2775bd (mariadb_backup) has been started and output is visible here. 
2026-02-15 05:01:31.997087 | orchestrator | 2026-02-15 05:01:31.997242 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 05:01:31.997261 | orchestrator | 2026-02-15 05:01:31.997273 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 05:01:31.997285 | orchestrator | Sunday 15 February 2026 05:01:07 +0000 (0:00:00.187) 0:00:00.187 ******* 2026-02-15 05:01:31.997296 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:01:31.997308 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:01:31.997342 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:01:31.997354 | orchestrator | 2026-02-15 05:01:31.997365 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 05:01:31.997376 | orchestrator | Sunday 15 February 2026 05:01:07 +0000 (0:00:00.348) 0:00:00.536 ******* 2026-02-15 05:01:31.997387 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-15 05:01:31.997399 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-15 05:01:31.997410 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-15 05:01:31.997421 | orchestrator | 2026-02-15 05:01:31.997432 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-02-15 05:01:31.997443 | orchestrator | 2026-02-15 05:01:31.997454 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-15 05:01:31.997465 | orchestrator | Sunday 15 February 2026 05:01:08 +0000 (0:00:00.594) 0:00:01.131 ******* 2026-02-15 05:01:31.997476 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 05:01:31.997487 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-15 05:01:31.997498 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-15 05:01:31.997509 | orchestrator | 
2026-02-15 05:01:31.997520 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-15 05:01:31.997531 | orchestrator | Sunday 15 February 2026 05:01:08 +0000 (0:00:00.397) 0:00:01.528 ******* 2026-02-15 05:01:31.997543 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:01:31.997556 | orchestrator | 2026-02-15 05:01:31.997567 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2026-02-15 05:01:31.997595 | orchestrator | Sunday 15 February 2026 05:01:09 +0000 (0:00:00.557) 0:00:02.086 ******* 2026-02-15 05:01:31.997607 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:01:31.997620 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:01:31.997634 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:01:31.997647 | orchestrator | 2026-02-15 05:01:31.997659 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2026-02-15 05:01:31.997672 | orchestrator | Sunday 15 February 2026 05:01:12 +0000 (0:00:03.349) 0:00:05.435 ******* 2026-02-15 05:01:31.997685 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-02-15 05:01:31.997697 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-15 05:01:31.997711 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-15 05:01:31.997724 | orchestrator | mariadb_bootstrap_restart 2026-02-15 05:01:31.997738 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:01:31.997750 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:01:31.997762 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:01:31.997775 | orchestrator | 2026-02-15 05:01:31.997788 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-15 05:01:31.997801 | orchestrator | 
skipping: no hosts matched 2026-02-15 05:01:31.997813 | orchestrator | 2026-02-15 05:01:31.997826 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-15 05:01:31.997838 | orchestrator | skipping: no hosts matched 2026-02-15 05:01:31.997852 | orchestrator | 2026-02-15 05:01:31.997865 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-15 05:01:31.997877 | orchestrator | skipping: no hosts matched 2026-02-15 05:01:31.997888 | orchestrator | 2026-02-15 05:01:31.997899 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-15 05:01:31.997910 | orchestrator | 2026-02-15 05:01:31.997920 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-15 05:01:31.997931 | orchestrator | Sunday 15 February 2026 05:01:30 +0000 (0:00:18.514) 0:00:23.950 ******* 2026-02-15 05:01:31.997942 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:01:31.997953 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:01:31.997964 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:01:31.997982 | orchestrator | 2026-02-15 05:01:31.997993 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-15 05:01:31.998004 | orchestrator | Sunday 15 February 2026 05:01:31 +0000 (0:00:00.324) 0:00:24.274 ******* 2026-02-15 05:01:31.998015 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:01:31.998128 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:01:31.998177 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:01:31.998197 | orchestrator | 2026-02-15 05:01:31.998215 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:01:31.998234 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 
05:01:31.998248 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-15 05:01:31.998259 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-15 05:01:31.998270 | orchestrator | 2026-02-15 05:01:31.998281 | orchestrator | 2026-02-15 05:01:31.998292 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:01:31.998303 | orchestrator | Sunday 15 February 2026 05:01:31 +0000 (0:00:00.398) 0:00:24.672 ******* 2026-02-15 05:01:31.998313 | orchestrator | =============================================================================== 2026-02-15 05:01:31.998324 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 18.51s 2026-02-15 05:01:31.998356 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.35s 2026-02-15 05:01:31.998368 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.59s 2026-02-15 05:01:31.998380 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.56s 2026-02-15 05:01:31.998391 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.40s 2026-02-15 05:01:31.998402 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.40s 2026-02-15 05:01:31.998413 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.35s 2026-02-15 05:01:31.998424 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2026-02-15 05:01:32.316592 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2026-02-15 05:01:32.323959 | orchestrator | + set -e 2026-02-15 05:01:32.324288 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-15 05:01:32.325570 | orchestrator | ++ export 
INTERACTIVE=false 2026-02-15 05:01:32.326168 | orchestrator | ++ INTERACTIVE=false 2026-02-15 05:01:32.326202 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-15 05:01:32.326214 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-15 05:01:32.326226 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-15 05:01:32.327891 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-15 05:01:32.336434 | orchestrator | 2026-02-15 05:01:32.336489 | orchestrator | # OpenStack endpoints 2026-02-15 05:01:32.336503 | orchestrator | 2026-02-15 05:01:32.336514 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-15 05:01:32.336525 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-15 05:01:32.336536 | orchestrator | + export OS_CLOUD=admin 2026-02-15 05:01:32.336547 | orchestrator | + OS_CLOUD=admin 2026-02-15 05:01:32.336558 | orchestrator | + echo 2026-02-15 05:01:32.336569 | orchestrator | + echo '# OpenStack endpoints' 2026-02-15 05:01:32.336580 | orchestrator | + echo 2026-02-15 05:01:32.336591 | orchestrator | + openstack endpoint list 2026-02-15 05:01:35.490938 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-15 05:01:35.491063 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2026-02-15 05:01:35.491079 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-15 05:01:35.491111 | orchestrator | | 1d34e5f6d0374ac9afaad7ae9e1e9c0f | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2026-02-15 05:01:35.491157 | orchestrator | | 2ef737da888144bbbb34fdba86e1e796 | RegionOne | magnum | container-infra | 
True | public | https://api.testbed.osism.xyz:9511/v1 | 2026-02-15 05:01:35.491168 | orchestrator | | 2fe0936852c04a37ad486e6f15b1c323 | RegionOne | manilav2 | sharev2 | True | internal | https://api-int.testbed.osism.xyz:8786/v2 | 2026-02-15 05:01:35.491179 | orchestrator | | 3d3128a1f5944c828804953331811dc7 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2026-02-15 05:01:35.491191 | orchestrator | | 422166a55a33498aaf9a043a5ece78bd | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2026-02-15 05:01:35.491202 | orchestrator | | 43a06f270a2b43da9e8c9fea0ca67785 | RegionOne | skyline | panel | True | public | https://api.testbed.osism.xyz:9998 | 2026-02-15 05:01:35.491213 | orchestrator | | 466c44fa002e426bb201bc35d9d143b1 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2026-02-15 05:01:35.491224 | orchestrator | | 48eb8c8be6f648789e8584b8ae5785ad | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2026-02-15 05:01:35.491234 | orchestrator | | 5b2dde1e8933453e82012abd06cc54bb | RegionOne | aodh | alarming | True | public | https://api.testbed.osism.xyz:8042 | 2026-02-15 05:01:35.491245 | orchestrator | | 5cfeb0269e1443ffa087937a3eb6f9fa | RegionOne | manilav2 | sharev2 | True | public | https://api.testbed.osism.xyz:8786/v2 | 2026-02-15 05:01:35.491256 | orchestrator | | 65315c452b7b44c188d528eed3dfa2e7 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2026-02-15 05:01:35.491267 | orchestrator | | 6586dc9647b7406f9317e16c657717dc | RegionOne | manila | share | True | internal | https://api-int.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-15 05:01:35.491278 | orchestrator | | 803e7251de834ac9b82f786a9285efdb | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2026-02-15 05:01:35.491289 | orchestrator | | 
86b72b53b2ef4fac93e388c0c128e4fb | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2026-02-15 05:01:35.491300 | orchestrator | | 8e2443c788814e48a28301d60ca2eb28 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-15 05:01:35.491311 | orchestrator | | 96bb355d556d4aa7b4f87a04e021bcf6 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2026-02-15 05:01:35.491321 | orchestrator | | 97d1843d3a4947b6b39eb3d8df2e3bcc | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2026-02-15 05:01:35.491332 | orchestrator | | 9abd6e11ec2047d3b48d42161fed7ecc | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2026-02-15 05:01:35.491343 | orchestrator | | a2a103b5d7004030afc52cb69e27c9b3 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2026-02-15 05:01:35.491354 | orchestrator | | b294733e0a8d4f27ba7eb222fda7ea6e | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2026-02-15 05:01:35.491393 | orchestrator | | b8443312036e4986ba7dcfec6d8b994e | RegionOne | aodh | alarming | True | internal | https://api-int.testbed.osism.xyz:8042 | 2026-02-15 05:01:35.491405 | orchestrator | | c4575ad5d2894a358aa47d747c18dfd2 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-15 05:01:35.491422 | orchestrator | | c61979bcd5b546b392e781c652820676 | RegionOne | skyline | panel | True | internal | https://api-int.testbed.osism.xyz:9998 | 2026-02-15 05:01:35.491433 | orchestrator | | cba09db2f8514bce8aae33cda980d9bb | RegionOne | manila | share | True | public | https://api.testbed.osism.xyz:8786/v1/%(tenant_id)s | 2026-02-15 05:01:35.491444 | orchestrator | | cf86793dd9264326b30b9218a136a5a8 | RegionOne | 
keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2026-02-15 05:01:35.491455 | orchestrator | | d092071ec398456fb25294212be54309 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2026-02-15 05:01:35.491466 | orchestrator | | d0952b9fa2f24eaead0e347534e26f1e | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2026-02-15 05:01:35.491479 | orchestrator | | ebd47f318b034218986f9c8ce6282d7d | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2026-02-15 05:01:35.491491 | orchestrator | | f004b67db7d74c5598c6b4c758fc0578 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2026-02-15 05:01:35.491504 | orchestrator | | f5276f1fe8174c85a6cbc61344af8f72 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2026-02-15 05:01:35.491517 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2026-02-15 05:01:35.738641 | orchestrator | 2026-02-15 05:01:35.738759 | orchestrator | # Cinder 2026-02-15 05:01:35.738779 | orchestrator | 2026-02-15 05:01:35.738791 | orchestrator | + echo 2026-02-15 05:01:35.738803 | orchestrator | + echo '# Cinder' 2026-02-15 05:01:35.738815 | orchestrator | + echo 2026-02-15 05:01:35.738827 | orchestrator | + openstack volume service list 2026-02-15 05:01:38.426665 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-15 05:01:38.426781 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2026-02-15 05:01:38.426798 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2026-02-15 05:01:38.426810 
| orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-15T05:01:33.000000 |
2026-02-15 05:01:38.426821 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-15T05:01:33.000000 |
2026-02-15 05:01:38.426832 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-15T05:01:33.000000 |
2026-02-15 05:01:38.426843 | orchestrator | | cinder-volume | testbed-node-0@rbd-volumes | nova | enabled | up | 2026-02-15T05:01:33.000000 |
2026-02-15 05:01:38.426853 | orchestrator | | cinder-volume | testbed-node-2@rbd-volumes | nova | enabled | up | 2026-02-15T05:01:29.000000 |
2026-02-15 05:01:38.426864 | orchestrator | | cinder-volume | testbed-node-1@rbd-volumes | nova | enabled | up | 2026-02-15T05:01:29.000000 |
2026-02-15 05:01:38.426875 | orchestrator | | cinder-backup | testbed-node-0 | nova | enabled | up | 2026-02-15T05:01:37.000000 |
2026-02-15 05:01:38.426886 | orchestrator | | cinder-backup | testbed-node-2 | nova | enabled | up | 2026-02-15T05:01:28.000000 |
2026-02-15 05:01:38.426896 | orchestrator | | cinder-backup | testbed-node-1 | nova | enabled | up | 2026-02-15T05:01:29.000000 |
2026-02-15 05:01:38.426932 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+
2026-02-15 05:01:38.672544 | orchestrator |
2026-02-15 05:01:38.672674 | orchestrator | # Neutron
2026-02-15 05:01:38.672692 | orchestrator |
2026-02-15 05:01:38.672704 | orchestrator | + echo
2026-02-15 05:01:38.672716 | orchestrator | + echo '# Neutron'
2026-02-15 05:01:38.672730 | orchestrator | + echo
2026-02-15 05:01:38.672742 | orchestrator | + openstack network agent list
2026-02-15 05:01:41.349009 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-15 05:01:41.349161 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
2026-02-15 05:01:41.349178 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-15 05:01:41.349191 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller |
2026-02-15 05:01:41.349202 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller |
2026-02-15 05:01:41.349213 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller |
2026-02-15 05:01:41.349243 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller |
2026-02-15 05:01:41.349254 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller |
2026-02-15 05:01:41.349265 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller |
2026-02-15 05:01:41.349276 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-15 05:01:41.349286 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-15 05:01:41.349297 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent |
2026-02-15 05:01:41.349308 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+
2026-02-15 05:01:41.618297 | orchestrator | + openstack network service provider list
2026-02-15 05:01:44.214720 | orchestrator | +---------------+------+---------+
2026-02-15 05:01:44.214832 | orchestrator | | Service
Type | Name | Default |
2026-02-15 05:01:44.214846 | orchestrator | +---------------+------+---------+
2026-02-15 05:01:44.214858 | orchestrator | | L3_ROUTER_NAT | ovn | True |
2026-02-15 05:01:44.214869 | orchestrator | +---------------+------+---------+
2026-02-15 05:01:44.501458 | orchestrator |
2026-02-15 05:01:44.501560 | orchestrator | # Nova
2026-02-15 05:01:44.501576 | orchestrator |
2026-02-15 05:01:44.501587 | orchestrator | + echo
2026-02-15 05:01:44.501599 | orchestrator | + echo '# Nova'
2026-02-15 05:01:44.501611 | orchestrator | + echo
2026-02-15 05:01:44.501622 | orchestrator | + openstack compute service list
2026-02-15 05:01:47.339820 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-15 05:01:47.339938 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At |
2026-02-15 05:01:47.339955 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-15 05:01:47.339967 | orchestrator | | 9481da52-cc0f-4fd7-9e61-c0b6b2594fd6 | nova-scheduler | testbed-node-0 | internal | enabled | up | 2026-02-15T05:01:46.000000 |
2026-02-15 05:01:47.340007 | orchestrator | | 8feeaf13-7127-4eb1-9bd8-2f93043f3056 | nova-scheduler | testbed-node-1 | internal | enabled | up | 2026-02-15T05:01:40.000000 |
2026-02-15 05:01:47.340020 | orchestrator | | 0335f9ac-164c-4f69-bd49-dff870087211 | nova-scheduler | testbed-node-2 | internal | enabled | up | 2026-02-15T05:01:41.000000 |
2026-02-15 05:01:47.340031 | orchestrator | | e06521bf-b95a-4408-8d61-94d154f68dab | nova-conductor | testbed-node-0 | internal | enabled | up | 2026-02-15T05:01:45.000000 |
2026-02-15 05:01:47.340042 | orchestrator | | a689be32-761e-46be-ab4d-016eef0538d1 | nova-conductor | testbed-node-1 | internal | enabled | up | 2026-02-15T05:01:46.000000 |
2026-02-15 05:01:47.340053 | orchestrator | | a411631d-7ec2-4a8c-9fc9-98de0db8e025 | nova-conductor | testbed-node-2 | internal | enabled | up | 2026-02-15T05:01:37.000000 |
2026-02-15 05:01:47.340100 | orchestrator | | dfb4ad42-92b5-4823-9672-284a86141bf7 | nova-compute | testbed-node-3 | nova | enabled | up | 2026-02-15T05:01:39.000000 |
2026-02-15 05:01:47.340115 | orchestrator | | 9fef8f9c-246b-49d7-b8db-3b65a5b43739 | nova-compute | testbed-node-4 | nova | enabled | up | 2026-02-15T05:01:40.000000 |
2026-02-15 05:01:47.340126 | orchestrator | | 27aa273d-58a3-440d-81ec-a80e8b02f4b8 | nova-compute | testbed-node-5 | nova | enabled | up | 2026-02-15T05:01:41.000000 |
2026-02-15 05:01:47.340137 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+
2026-02-15 05:01:47.627641 | orchestrator | + openstack hypervisor list
2026-02-15 05:01:50.843331 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-15 05:01:50.843406 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
2026-02-15 05:01:50.843412 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-15 05:01:50.843416 | orchestrator | | b08529b5-6f8e-425e-9ac1-c6111f502b3a | testbed-node-3 | QEMU | 192.168.16.13 | up |
2026-02-15 05:01:50.843420 | orchestrator | | e4f66b81-76dd-4db3-81f0-8bf0923ad44c | testbed-node-4 | QEMU | 192.168.16.14 | up |
2026-02-15 05:01:50.843424 | orchestrator | | 7ff5c225-9739-4298-8413-ad89d1618e09 | testbed-node-5 | QEMU | 192.168.16.15 | up |
2026-02-15 05:01:50.843428 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+
2026-02-15 05:01:51.111543 | orchestrator |
2026-02-15 05:01:51.111644 | orchestrator | # Run OpenStack test play
2026-02-15 05:01:51.111661 | orchestrator
| 2026-02-15 05:01:51.111677 | orchestrator | + echo 2026-02-15 05:01:51.111690 | orchestrator | + echo '# Run OpenStack test play' 2026-02-15 05:01:51.111702 | orchestrator | + echo 2026-02-15 05:01:51.111714 | orchestrator | + osism apply --environment openstack test 2026-02-15 05:01:53.109972 | orchestrator | 2026-02-15 05:01:53 | INFO  | Trying to run play test in environment openstack 2026-02-15 05:02:03.249544 | orchestrator | 2026-02-15 05:02:03 | INFO  | Task c3f2e6df-0063-43fa-946c-c9828b6adcf7 (test) was prepared for execution. 2026-02-15 05:02:03.249665 | orchestrator | 2026-02-15 05:02:03 | INFO  | It takes a moment until task c3f2e6df-0063-43fa-946c-c9828b6adcf7 (test) has been started and output is visible here. 2026-02-15 05:04:37.864813 | orchestrator | 2026-02-15 05:04:37.864939 | orchestrator | PLAY [Create test project] ***************************************************** 2026-02-15 05:04:37.864959 | orchestrator | 2026-02-15 05:04:37.864973 | orchestrator | TASK [Create test domain] ****************************************************** 2026-02-15 05:04:37.864987 | orchestrator | Sunday 15 February 2026 05:02:07 +0000 (0:00:00.072) 0:00:00.072 ******* 2026-02-15 05:04:37.865001 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865017 | orchestrator | 2026-02-15 05:04:37.865031 | orchestrator | TASK [Create test-admin user] ************************************************** 2026-02-15 05:04:37.865046 | orchestrator | Sunday 15 February 2026 05:02:11 +0000 (0:00:03.632) 0:00:03.704 ******* 2026-02-15 05:04:37.865087 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865102 | orchestrator | 2026-02-15 05:04:37.865116 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2026-02-15 05:04:37.865131 | orchestrator | Sunday 15 February 2026 05:02:15 +0000 (0:00:04.235) 0:00:07.940 ******* 2026-02-15 05:04:37.865145 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865160 | orchestrator 
| 2026-02-15 05:04:37.865174 | orchestrator | TASK [Create test project] ***************************************************** 2026-02-15 05:04:37.865187 | orchestrator | Sunday 15 February 2026 05:02:21 +0000 (0:00:06.522) 0:00:14.462 ******* 2026-02-15 05:04:37.865202 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865217 | orchestrator | 2026-02-15 05:04:37.865231 | orchestrator | TASK [Create test user] ******************************************************** 2026-02-15 05:04:37.865246 | orchestrator | Sunday 15 February 2026 05:02:25 +0000 (0:00:03.973) 0:00:18.435 ******* 2026-02-15 05:04:37.865260 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865335 | orchestrator | 2026-02-15 05:04:37.865353 | orchestrator | TASK [Add member roles to user test] ******************************************* 2026-02-15 05:04:37.865368 | orchestrator | Sunday 15 February 2026 05:02:29 +0000 (0:00:04.174) 0:00:22.610 ******* 2026-02-15 05:04:37.865384 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2026-02-15 05:04:37.865399 | orchestrator | changed: [localhost] => (item=member) 2026-02-15 05:04:37.865442 | orchestrator | changed: [localhost] => (item=creator) 2026-02-15 05:04:37.865457 | orchestrator | 2026-02-15 05:04:37.865472 | orchestrator | TASK [Create test server group] ************************************************ 2026-02-15 05:04:37.865486 | orchestrator | Sunday 15 February 2026 05:02:41 +0000 (0:00:11.465) 0:00:34.075 ******* 2026-02-15 05:04:37.865501 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865514 | orchestrator | 2026-02-15 05:04:37.865529 | orchestrator | TASK [Create ssh security group] *********************************************** 2026-02-15 05:04:37.865542 | orchestrator | Sunday 15 February 2026 05:02:45 +0000 (0:00:04.283) 0:00:38.359 ******* 2026-02-15 05:04:37.865556 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865569 | orchestrator | 2026-02-15 05:04:37.865583 | orchestrator | 
TASK [Add rule to ssh security group] ****************************************** 2026-02-15 05:04:37.865597 | orchestrator | Sunday 15 February 2026 05:02:50 +0000 (0:00:04.913) 0:00:43.272 ******* 2026-02-15 05:04:37.865612 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865626 | orchestrator | 2026-02-15 05:04:37.865641 | orchestrator | TASK [Create icmp security group] ********************************************** 2026-02-15 05:04:37.865654 | orchestrator | Sunday 15 February 2026 05:02:55 +0000 (0:00:04.363) 0:00:47.636 ******* 2026-02-15 05:04:37.865668 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865683 | orchestrator | 2026-02-15 05:04:37.865698 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2026-02-15 05:04:37.865714 | orchestrator | Sunday 15 February 2026 05:02:59 +0000 (0:00:04.041) 0:00:51.678 ******* 2026-02-15 05:04:37.865728 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865743 | orchestrator | 2026-02-15 05:04:37.865756 | orchestrator | TASK [Create test keypair] ***************************************************** 2026-02-15 05:04:37.865771 | orchestrator | Sunday 15 February 2026 05:03:03 +0000 (0:00:04.034) 0:00:55.713 ******* 2026-02-15 05:04:37.865785 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865799 | orchestrator | 2026-02-15 05:04:37.865812 | orchestrator | TASK [Create test network] ***************************************************** 2026-02-15 05:04:37.865827 | orchestrator | Sunday 15 February 2026 05:03:07 +0000 (0:00:03.918) 0:00:59.632 ******* 2026-02-15 05:04:37.865842 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865855 | orchestrator | 2026-02-15 05:04:37.865869 | orchestrator | TASK [Create test subnet] ****************************************************** 2026-02-15 05:04:37.865883 | orchestrator | Sunday 15 February 2026 05:03:11 +0000 (0:00:04.801) 0:01:04.434 ******* 2026-02-15 05:04:37.865895 | orchestrator | 
changed: [localhost] 2026-02-15 05:04:37.865909 | orchestrator | 2026-02-15 05:04:37.865923 | orchestrator | TASK [Create test router] ****************************************************** 2026-02-15 05:04:37.865955 | orchestrator | Sunday 15 February 2026 05:03:17 +0000 (0:00:05.523) 0:01:09.957 ******* 2026-02-15 05:04:37.865969 | orchestrator | changed: [localhost] 2026-02-15 05:04:37.865982 | orchestrator | 2026-02-15 05:04:37.866062 | orchestrator | PLAY [Manage test instances and volumes] *************************************** 2026-02-15 05:04:37.866075 | orchestrator | 2026-02-15 05:04:37.866083 | orchestrator | TASK [Get test server group] *************************************************** 2026-02-15 05:04:37.866091 | orchestrator | Sunday 15 February 2026 05:03:28 +0000 (0:00:10.851) 0:01:20.808 ******* 2026-02-15 05:04:37.866099 | orchestrator | ok: [localhost] 2026-02-15 05:04:37.866108 | orchestrator | 2026-02-15 05:04:37.866117 | orchestrator | TASK [Detach test volume] ****************************************************** 2026-02-15 05:04:37.866125 | orchestrator | Sunday 15 February 2026 05:03:31 +0000 (0:00:03.599) 0:01:24.408 ******* 2026-02-15 05:04:37.866132 | orchestrator | skipping: [localhost] 2026-02-15 05:04:37.866140 | orchestrator | 2026-02-15 05:04:37.866148 | orchestrator | TASK [Delete test volume] ****************************************************** 2026-02-15 05:04:37.866156 | orchestrator | Sunday 15 February 2026 05:03:31 +0000 (0:00:00.055) 0:01:24.463 ******* 2026-02-15 05:04:37.866164 | orchestrator | skipping: [localhost] 2026-02-15 05:04:37.866172 | orchestrator | 2026-02-15 05:04:37.866184 | orchestrator | TASK [Delete test instances] *************************************************** 2026-02-15 05:04:37.866192 | orchestrator | Sunday 15 February 2026 05:03:31 +0000 (0:00:00.070) 0:01:24.533 ******* 2026-02-15 05:04:37.866200 | orchestrator | skipping: [localhost] => (item=test-4)  2026-02-15 05:04:37.866209 | 
orchestrator | skipping: [localhost] => (item=test-3)  2026-02-15 05:04:37.866237 | orchestrator | skipping: [localhost] => (item=test-2)  2026-02-15 05:04:37.866246 | orchestrator | skipping: [localhost] => (item=test-1)  2026-02-15 05:04:37.866254 | orchestrator | skipping: [localhost] => (item=test)  2026-02-15 05:04:37.866262 | orchestrator | skipping: [localhost] 2026-02-15 05:04:37.866270 | orchestrator | 2026-02-15 05:04:37.866278 | orchestrator | TASK [Wait for instance deletion to complete] ********************************** 2026-02-15 05:04:37.866286 | orchestrator | Sunday 15 February 2026 05:03:32 +0000 (0:00:00.170) 0:01:24.703 ******* 2026-02-15 05:04:37.866294 | orchestrator | skipping: [localhost] 2026-02-15 05:04:37.866302 | orchestrator | 2026-02-15 05:04:37.866310 | orchestrator | TASK [Create test instances] *************************************************** 2026-02-15 05:04:37.866318 | orchestrator | Sunday 15 February 2026 05:03:32 +0000 (0:00:00.146) 0:01:24.850 ******* 2026-02-15 05:04:37.866326 | orchestrator | changed: [localhost] => (item=test) 2026-02-15 05:04:37.866334 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-15 05:04:37.866342 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-15 05:04:37.866349 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-15 05:04:37.866357 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-15 05:04:37.866365 | orchestrator | 2026-02-15 05:04:37.866373 | orchestrator | TASK [Wait for instance creation to complete] ********************************** 2026-02-15 05:04:37.866381 | orchestrator | Sunday 15 February 2026 05:03:36 +0000 (0:00:04.567) 0:01:29.417 ******* 2026-02-15 05:04:37.866389 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (60 retries left). 2026-02-15 05:04:37.866398 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (59 retries left). 
2026-02-15 05:04:37.866442 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (58 retries left). 2026-02-15 05:04:37.866453 | orchestrator | FAILED - RETRYING: [localhost]: Wait for instance creation to complete (57 retries left). 2026-02-15 05:04:37.866463 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j336342206029.3752', 'results_file': '/ansible/.ansible_async/j336342206029.3752', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-15 05:04:37.866474 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j283242980251.3777', 'results_file': '/ansible/.ansible_async/j283242980251.3777', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-15 05:04:37.866491 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j570328528346.3802', 'results_file': '/ansible/.ansible_async/j570328528346.3802', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-15 05:04:37.866500 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j634327915547.3827', 'results_file': '/ansible/.ansible_async/j634327915547.3827', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-15 05:04:37.866508 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j412274000613.3852', 'results_file': '/ansible/.ansible_async/j412274000613.3852', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-15 05:04:37.866516 | orchestrator | 2026-02-15 05:04:37.866524 | orchestrator | TASK [Add metadata to instances] *********************************************** 2026-02-15 05:04:37.866532 | orchestrator | Sunday 15 February 2026 05:04:23 +0000 (0:00:46.987) 0:02:16.405 ******* 2026-02-15 05:04:37.866540 | 
orchestrator | changed: [localhost] => (item=test) 2026-02-15 05:04:37.866548 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-15 05:04:37.866556 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-15 05:04:37.866564 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-15 05:04:37.866571 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-15 05:04:37.866579 | orchestrator | 2026-02-15 05:04:37.866587 | orchestrator | TASK [Wait for metadata to be added] ******************************************* 2026-02-15 05:04:37.866595 | orchestrator | Sunday 15 February 2026 05:04:28 +0000 (0:00:04.593) 0:02:20.999 ******* 2026-02-15 05:04:37.866603 | orchestrator | FAILED - RETRYING: [localhost]: Wait for metadata to be added (30 retries left). 2026-02-15 05:04:37.866612 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j284568096126.3957', 'results_file': '/ansible/.ansible_async/j284568096126.3957', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-15 05:04:37.866621 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j607607535693.3982', 'results_file': '/ansible/.ansible_async/j607607535693.3982', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-15 05:04:37.866634 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j982257758879.4007', 'results_file': '/ansible/.ansible_async/j982257758879.4007', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-15 05:04:37.866649 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j708291513166.4032', 'results_file': '/ansible/.ansible_async/j708291513166.4032', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-15 05:05:19.177603 | orchestrator | changed: [localhost] => 
(item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j85630626660.4057', 'results_file': '/ansible/.ansible_async/j85630626660.4057', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-15 05:05:19.177745 | orchestrator | 2026-02-15 05:05:19.177771 | orchestrator | TASK [Add tag to instances] **************************************************** 2026-02-15 05:05:19.177795 | orchestrator | Sunday 15 February 2026 05:04:37 +0000 (0:00:09.468) 0:02:30.467 ******* 2026-02-15 05:05:19.177818 | orchestrator | changed: [localhost] => (item=test) 2026-02-15 05:05:19.177843 | orchestrator | changed: [localhost] => (item=test-1) 2026-02-15 05:05:19.177866 | orchestrator | changed: [localhost] => (item=test-2) 2026-02-15 05:05:19.177888 | orchestrator | changed: [localhost] => (item=test-3) 2026-02-15 05:05:19.177910 | orchestrator | changed: [localhost] => (item=test-4) 2026-02-15 05:05:19.177933 | orchestrator | 2026-02-15 05:05:19.177954 | orchestrator | TASK [Wait for tags to be added] *********************************************** 2026-02-15 05:05:19.178010 | orchestrator | Sunday 15 February 2026 05:04:42 +0000 (0:00:04.852) 0:02:35.320 ******* 2026-02-15 05:05:19.178105 | orchestrator | FAILED - RETRYING: [localhost]: Wait for tags to be added (30 retries left). 
2026-02-15 05:05:19.178132 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j453285950440.4126', 'results_file': '/ansible/.ansible_async/j453285950440.4126', 'changed': True, 'item': 'test', 'ansible_loop_var': 'item'}) 2026-02-15 05:05:19.178157 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j426129638507.4151', 'results_file': '/ansible/.ansible_async/j426129638507.4151', 'changed': True, 'item': 'test-1', 'ansible_loop_var': 'item'}) 2026-02-15 05:05:19.178181 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j742772733767.4177', 'results_file': '/ansible/.ansible_async/j742772733767.4177', 'changed': True, 'item': 'test-2', 'ansible_loop_var': 'item'}) 2026-02-15 05:05:19.178204 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j422120430706.4203', 'results_file': '/ansible/.ansible_async/j422120430706.4203', 'changed': True, 'item': 'test-3', 'ansible_loop_var': 'item'}) 2026-02-15 05:05:19.178228 | orchestrator | changed: [localhost] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j357572043237.4229', 'results_file': '/ansible/.ansible_async/j357572043237.4229', 'changed': True, 'item': 'test-4', 'ansible_loop_var': 'item'}) 2026-02-15 05:05:19.178252 | orchestrator | 2026-02-15 05:05:19.178311 | orchestrator | TASK [Create test volume] ****************************************************** 2026-02-15 05:05:19.178335 | orchestrator | Sunday 15 February 2026 05:04:53 +0000 (0:00:10.818) 0:02:46.138 ******* 2026-02-15 05:05:19.178360 | orchestrator | changed: [localhost] 2026-02-15 05:05:19.178382 | orchestrator | 2026-02-15 05:05:19.178405 | orchestrator | TASK [Attach test volume] ****************************************************** 2026-02-15 05:05:19.178430 | orchestrator | Sunday 15 February 
2026 05:05:00 +0000 (0:00:06.598) 0:02:52.736 *******
2026-02-15 05:05:19.178453 | orchestrator | changed: [localhost]
2026-02-15 05:05:19.178477 | orchestrator |
2026-02-15 05:05:19.178499 | orchestrator | TASK [Create floating ip address] **********************************************
2026-02-15 05:05:19.178521 | orchestrator | Sunday 15 February 2026 05:05:13 +0000 (0:00:13.658) 0:03:06.395 *******
2026-02-15 05:05:19.178543 | orchestrator | ok: [localhost]
2026-02-15 05:05:19.178566 | orchestrator |
2026-02-15 05:05:19.178588 | orchestrator | TASK [Print floating ip address] ***********************************************
2026-02-15 05:05:19.178610 | orchestrator | Sunday 15 February 2026 05:05:18 +0000 (0:00:05.061) 0:03:11.456 *******
2026-02-15 05:05:19.178631 | orchestrator | ok: [localhost] => {
2026-02-15 05:05:19.178653 | orchestrator |  "msg": "192.168.112.136"
2026-02-15 05:05:19.178676 | orchestrator | }
2026-02-15 05:05:19.178696 | orchestrator |
2026-02-15 05:05:19.178718 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 05:05:19.178737 | orchestrator | localhost : ok=26  changed=23  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-02-15 05:05:19.178757 | orchestrator |
2026-02-15 05:05:19.178780 | orchestrator |
2026-02-15 05:05:19.178802 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 05:05:19.178825 | orchestrator | Sunday 15 February 2026 05:05:18 +0000 (0:00:00.046) 0:03:11.503 *******
2026-02-15 05:05:19.178846 | orchestrator | ===============================================================================
2026-02-15 05:05:19.178868 | orchestrator | Wait for instance creation to complete --------------------------------- 46.99s
2026-02-15 05:05:19.178891 | orchestrator | Attach test volume ----------------------------------------------------- 13.66s
2026-02-15 05:05:19.178913 | orchestrator | Add member roles to user test ------------------------------------------ 11.47s
2026-02-15 05:05:19.178972 | orchestrator | Create test router ----------------------------------------------------- 10.85s
2026-02-15 05:05:19.178994 | orchestrator | Wait for tags to be added ---------------------------------------------- 10.82s
2026-02-15 05:05:19.179016 | orchestrator | Wait for metadata to be added ------------------------------------------- 9.47s
2026-02-15 05:05:19.179037 | orchestrator | Create test volume ------------------------------------------------------ 6.60s
2026-02-15 05:05:19.179086 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.52s
2026-02-15 05:05:19.179107 | orchestrator | Create test subnet ------------------------------------------------------ 5.52s
2026-02-15 05:05:19.179128 | orchestrator | Create floating ip address ---------------------------------------------- 5.06s
2026-02-15 05:05:19.179149 | orchestrator | Create ssh security group ----------------------------------------------- 4.91s
2026-02-15 05:05:19.179169 | orchestrator | Add tag to instances ---------------------------------------------------- 4.85s
2026-02-15 05:05:19.179190 | orchestrator | Create test network ----------------------------------------------------- 4.80s
2026-02-15 05:05:19.179211 | orchestrator | Add metadata to instances ----------------------------------------------- 4.59s
2026-02-15 05:05:19.179231 | orchestrator | Create test instances --------------------------------------------------- 4.57s
2026-02-15 05:05:19.179254 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.36s
2026-02-15 05:05:19.179315 | orchestrator | Create test server group ------------------------------------------------ 4.28s
2026-02-15 05:05:19.179336 | orchestrator | Create test-admin user -------------------------------------------------- 4.24s
2026-02-15 05:05:19.179357 | orchestrator | Create test user
-------------------------------------------------------- 4.17s 2026-02-15 05:05:19.179379 | orchestrator | Create icmp security group ---------------------------------------------- 4.04s 2026-02-15 05:05:19.515803 | orchestrator | + server_list 2026-02-15 05:05:19.515869 | orchestrator | + openstack --os-cloud test server list 2026-02-15 05:05:23.167259 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-15 05:05:23.167420 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2026-02-15 05:05:23.167445 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-15 05:05:23.167464 | orchestrator | | 3ec1e0e8-e6ca-4027-8f78-a7a69b658cea | test-4 | ACTIVE | test=192.168.112.163, 192.168.200.155 | N/A (booted from volume) | SCS-1L-1 | 2026-02-15 05:05:23.167482 | orchestrator | | eaf150d2-1a5d-4ea2-abcf-2c2c47e7e82f | test-2 | ACTIVE | test=192.168.112.180, 192.168.200.225 | N/A (booted from volume) | SCS-1L-1 | 2026-02-15 05:05:23.167494 | orchestrator | | f44a52ef-1bb9-4018-833f-149fb57c6cbd | test-3 | ACTIVE | test=192.168.112.195, 192.168.200.203 | N/A (booted from volume) | SCS-1L-1 | 2026-02-15 05:05:23.167505 | orchestrator | | 0a499364-76c3-4aeb-a1d5-1026fc640ff6 | test | ACTIVE | test=192.168.112.136, 192.168.200.111 | N/A (booted from volume) | SCS-1L-1 | 2026-02-15 05:05:23.167516 | orchestrator | | 59fac935-cdcb-490c-8bb2-5451d4c2af18 | test-1 | ACTIVE | test=192.168.112.108, 192.168.200.92 | N/A (booted from volume) | SCS-1L-1 | 2026-02-15 05:05:23.167527 | orchestrator | +--------------------------------------+--------+--------+---------------------------------------+--------------------------+----------+ 2026-02-15 05:05:23.420484 | orchestrator | + openstack --os-cloud test server show test 2026-02-15 05:05:26.583969 | orchestrator 
| +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:26.584110 | orchestrator | | Field | Value |
2026-02-15 05:05:26.584170 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:26.584219 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-15 05:05:26.584280 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-15 05:05:26.584301 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-15 05:05:26.584321 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test |
2026-02-15 05:05:26.584374 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-15 05:05:26.584398 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-15 05:05:26.584439 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-15 05:05:26.584460 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-15 05:05:26.584495 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-15 05:05:26.584516 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-15 05:05:26.584555 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-15 05:05:26.584576 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-15 05:05:26.584596 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-15 05:05:26.584616 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-15 05:05:26.584636 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-15 05:05:26.584655 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-15T05:04:09.000000 |
2026-02-15 05:05:26.584685 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-15 05:05:26.584724 | orchestrator | | accessIPv4 | |
2026-02-15 05:05:26.584744 | orchestrator | | accessIPv6 | |
2026-02-15 05:05:26.584766 | orchestrator | | addresses | test=192.168.112.136, 192.168.200.111 |
2026-02-15 05:05:26.584785 | orchestrator | | config_drive | |
2026-02-15 05:05:26.584805 | orchestrator | | created | 2026-02-15T05:03:42Z |
2026-02-15 05:05:26.584836 | orchestrator | | description | None |
2026-02-15 05:05:26.584856 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-15 05:05:26.584876 | orchestrator | | hostId | 036b6326a1d98f4ee227ee910502851614e0490b7949d4814d841310 |
2026-02-15 05:05:26.584894 | orchestrator | | host_status | None |
2026-02-15 05:05:26.584935 | orchestrator | | id | 0a499364-76c3-4aeb-a1d5-1026fc640ff6 |
2026-02-15 05:05:26.584954 | orchestrator | | image | N/A (booted from volume) |
2026-02-15 05:05:26.584974 | orchestrator | | key_name | test |
2026-02-15 05:05:26.584993 | orchestrator | | locked | False |
2026-02-15 05:05:26.585020 | orchestrator | | locked_reason | None |
2026-02-15 05:05:26.585040 | orchestrator | | name | test |
2026-02-15 05:05:26.585059 | orchestrator | | pinned_availability_zone | None |
2026-02-15 05:05:26.585077 | orchestrator | | progress | 0 |
2026-02-15 05:05:26.585095 | orchestrator | | project_id | 5df93ba4bc674eefac5332cd7fcc3b29 |
2026-02-15 05:05:26.585115 | orchestrator | | properties | hostname='test' |
2026-02-15 05:05:26.585155 | orchestrator | | security_groups | name='ssh' |
2026-02-15 05:05:26.585175 | orchestrator | | | name='icmp' |
2026-02-15 05:05:26.585193 | orchestrator | | server_groups | None |
2026-02-15 05:05:26.585212 | orchestrator | | status | ACTIVE |
2026-02-15 05:05:26.585239 | orchestrator | | tags | test |
2026-02-15 05:05:26.585284 | orchestrator | | trusted_image_certificates | None |
2026-02-15 05:05:26.585297 | orchestrator | | updated | 2026-02-15T05:04:29Z |
2026-02-15 05:05:26.585308 | orchestrator | | user_id | a4962dabc48644498d47bd2724b2358b |
2026-02-15 05:05:26.585319 | orchestrator | | volumes_attached | delete_on_termination='True', id='02be034a-8a0c-4c94-9a3a-99b505a42988' |
2026-02-15 05:05:26.585340 | orchestrator | | | delete_on_termination='False', id='f325cd83-4711-47c1-956d-ffc4eedff52d' |
2026-02-15 05:05:26.587605 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:26.866707 | orchestrator | + openstack --os-cloud test server show test-1
2026-02-15 05:05:29.836431 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:29.836541 | orchestrator | | Field | Value |
2026-02-15 05:05:29.836568 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:29.836581 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-15 05:05:29.836592 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-15 05:05:29.836604 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-15 05:05:29.836615 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 |
2026-02-15 05:05:29.836647 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-15 05:05:29.836659 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-15 05:05:29.836687 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-15 05:05:29.836700 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-15 05:05:29.836711 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-15 05:05:29.836727 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-15 05:05:29.836739 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-15 05:05:29.836750 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-15 05:05:29.836761 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-15 05:05:29.836781 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-15 05:05:29.836793 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-15 05:05:29.836804 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-15T05:04:08.000000 |
2026-02-15 05:05:29.836822 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-15 05:05:29.836834 | orchestrator | | accessIPv4 | |
2026-02-15 05:05:29.836845 | orchestrator | | accessIPv6 | |
2026-02-15 05:05:29.836860 | orchestrator | | addresses | test=192.168.112.108, 192.168.200.92 |
2026-02-15 05:05:29.836872 | orchestrator | | config_drive | |
2026-02-15 05:05:29.836884 | orchestrator | | created | 2026-02-15T05:03:42Z |
2026-02-15 05:05:29.836895 | orchestrator | | description | None |
2026-02-15 05:05:29.836913 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-15 05:05:29.836924 | orchestrator | | hostId | 036b6326a1d98f4ee227ee910502851614e0490b7949d4814d841310 |
2026-02-15 05:05:29.836936 | orchestrator | | host_status | None |
2026-02-15 05:05:29.836955 | orchestrator | | id | 59fac935-cdcb-490c-8bb2-5451d4c2af18 |
2026-02-15 05:05:29.836970 | orchestrator | | image | N/A (booted from volume) |
2026-02-15 05:05:29.836983 | orchestrator | | key_name | test |
2026-02-15 05:05:29.836997 | orchestrator | | locked | False |
2026-02-15 05:05:29.837010 | orchestrator | | locked_reason | None |
2026-02-15 05:05:29.837023 | orchestrator | | name | test-1 |
2026-02-15 05:05:29.837048 | orchestrator | | pinned_availability_zone | None |
2026-02-15 05:05:29.837062 | orchestrator | | progress | 0 |
2026-02-15 05:05:29.837076 | orchestrator | | project_id | 5df93ba4bc674eefac5332cd7fcc3b29 |
2026-02-15 05:05:29.837089 | orchestrator | | properties | hostname='test-1' |
2026-02-15 05:05:29.837110 | orchestrator | | security_groups | name='ssh' |
2026-02-15 05:05:29.837124 | orchestrator | | | name='icmp' |
2026-02-15 05:05:29.837138 | orchestrator | | server_groups | None |
2026-02-15 05:05:29.837156 | orchestrator | | status | ACTIVE |
2026-02-15 05:05:29.837170 | orchestrator | | tags | test |
2026-02-15 05:05:29.837192 | orchestrator | | trusted_image_certificates | None |
2026-02-15 05:05:29.837207 | orchestrator | | updated | 2026-02-15T05:04:30Z |
2026-02-15 05:05:29.837221 | orchestrator | | user_id | a4962dabc48644498d47bd2724b2358b |
2026-02-15 05:05:29.837235 | orchestrator | | volumes_attached | delete_on_termination='True', id='5d344371-e2db-4949-b9f4-200ef2382147' |
2026-02-15 05:05:29.840175 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:30.105325 | orchestrator | + openstack --os-cloud test server show test-2
2026-02-15 05:05:33.159690 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:33.159802 | orchestrator | | Field | Value |
2026-02-15 05:05:33.159819 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:33.159839 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-15 05:05:33.159869 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-15 05:05:33.159882 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-15 05:05:33.159893 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 |
2026-02-15 05:05:33.159904 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-15 05:05:33.159915 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-15 05:05:33.159943 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-15 05:05:33.159955 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-15 05:05:33.159967 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-15 05:05:33.159978 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-15 05:05:33.160001 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-15 05:05:33.160013 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-15 05:05:33.160024 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-15 05:05:33.160035 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-15 05:05:33.160046 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-15 05:05:33.160057 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-15T05:04:09.000000 |
2026-02-15 05:05:33.160075 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-15 05:05:33.160087 | orchestrator | | accessIPv4 | |
2026-02-15 05:05:33.160098 | orchestrator | | accessIPv6 | |
2026-02-15 05:05:33.160113 | orchestrator | | addresses | test=192.168.112.180, 192.168.200.225 |
2026-02-15 05:05:33.160132 | orchestrator | | config_drive | |
2026-02-15 05:05:33.160143 | orchestrator | | created | 2026-02-15T05:03:43Z |
2026-02-15 05:05:33.160155 | orchestrator | | description | None |
2026-02-15 05:05:33.160166 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-15 05:05:33.160177 | orchestrator | | hostId | cea69c3fd230c0222fac4794cec61fe2b425e6cb2c5041e8b73c3cd7 |
2026-02-15 05:05:33.160188 | orchestrator | | host_status | None |
2026-02-15 05:05:33.160206 | orchestrator | | id | eaf150d2-1a5d-4ea2-abcf-2c2c47e7e82f |
2026-02-15 05:05:33.160218 | orchestrator | | image | N/A (booted from volume) |
2026-02-15 05:05:33.160260 | orchestrator | | key_name | test |
2026-02-15 05:05:33.160288 | orchestrator | | locked | False |
2026-02-15 05:05:33.160302 | orchestrator | | locked_reason | None |
2026-02-15 05:05:33.160316 | orchestrator | | name | test-2 |
2026-02-15 05:05:33.160329 | orchestrator | | pinned_availability_zone | None |
2026-02-15 05:05:33.160342 | orchestrator | | progress | 0 |
2026-02-15 05:05:33.160356 | orchestrator | | project_id | 5df93ba4bc674eefac5332cd7fcc3b29 |
2026-02-15 05:05:33.160369 | orchestrator | | properties | hostname='test-2' |
2026-02-15 05:05:33.160390 | orchestrator | | security_groups | name='ssh' |
2026-02-15 05:05:33.160404 | orchestrator | | | name='icmp' |
2026-02-15 05:05:33.160423 | orchestrator | | server_groups | None |
2026-02-15 05:05:33.160441 | orchestrator | | status | ACTIVE |
2026-02-15 05:05:33.160471 | orchestrator | | tags | test |
2026-02-15 05:05:33.160495 | orchestrator | | trusted_image_certificates | None |
2026-02-15 05:05:33.160509 | orchestrator | | updated | 2026-02-15T05:04:31Z |
2026-02-15 05:05:33.160522 | orchestrator | | user_id | a4962dabc48644498d47bd2724b2358b |
2026-02-15 05:05:33.160535 | orchestrator | | volumes_attached | delete_on_termination='True', id='23dcff87-1f35-40df-9209-115f0d3e01c7' |
2026-02-15 05:05:33.163488 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:33.414182 | orchestrator | + openstack --os-cloud test server show test-3
2026-02-15 05:05:36.332555 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:36.332673 | orchestrator | | Field | Value |
2026-02-15 05:05:36.332687 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:36.332697 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-15 05:05:36.332706 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-15 05:05:36.333064 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-15 05:05:36.333077 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 |
2026-02-15 05:05:36.333087 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-15 05:05:36.333096 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-15 05:05:36.333122 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-15 05:05:36.333147 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-15 05:05:36.333157 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-15 05:05:36.333166 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-15 05:05:36.333175 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-15 05:05:36.333184 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-15 05:05:36.333192 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-15 05:05:36.333201 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-15 05:05:36.333210 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-15 05:05:36.333219 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-15T05:04:09.000000 |
2026-02-15 05:05:36.333263 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-15 05:05:36.333283 | orchestrator | | accessIPv4 | |
2026-02-15 05:05:36.333292 | orchestrator | | accessIPv6 | |
2026-02-15 05:05:36.333301 | orchestrator | | addresses | test=192.168.112.195, 192.168.200.203 |
2026-02-15 05:05:36.333310 | orchestrator | | config_drive | |
2026-02-15 05:05:36.333319 | orchestrator | | created | 2026-02-15T05:03:43Z |
2026-02-15 05:05:36.333328 | orchestrator | | description | None |
2026-02-15 05:05:36.333337 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-15 05:05:36.333346 | orchestrator | | hostId | cea69c3fd230c0222fac4794cec61fe2b425e6cb2c5041e8b73c3cd7 |
2026-02-15 05:05:36.333355 | orchestrator | | host_status | None |
2026-02-15 05:05:36.333376 | orchestrator | | id | f44a52ef-1bb9-4018-833f-149fb57c6cbd |
2026-02-15 05:05:36.333389 | orchestrator | | image | N/A (booted from volume) |
2026-02-15 05:05:36.333399 | orchestrator | | key_name | test |
2026-02-15 05:05:36.333408 | orchestrator | | locked | False |
2026-02-15 05:05:36.333417 | orchestrator | | locked_reason | None |
2026-02-15 05:05:36.333426 | orchestrator | | name | test-3 |
2026-02-15 05:05:36.333434 | orchestrator | | pinned_availability_zone | None |
2026-02-15 05:05:36.333444 | orchestrator | | progress | 0 |
2026-02-15 05:05:36.333453 | orchestrator | | project_id | 5df93ba4bc674eefac5332cd7fcc3b29 |
2026-02-15 05:05:36.333467 | orchestrator | | properties | hostname='test-3' |
2026-02-15 05:05:36.333483 | orchestrator | | security_groups | name='ssh' |
2026-02-15 05:05:36.333496 | orchestrator | | | name='icmp' |
2026-02-15 05:05:36.333505 | orchestrator | | server_groups | None |
2026-02-15 05:05:36.333514 | orchestrator | | status | ACTIVE |
2026-02-15 05:05:36.333523 | orchestrator | | tags | test |
2026-02-15 05:05:36.333532 | orchestrator | | trusted_image_certificates | None |
2026-02-15 05:05:36.333541 | orchestrator | | updated | 2026-02-15T05:04:31Z |
2026-02-15 05:05:36.333550 | orchestrator | | user_id | a4962dabc48644498d47bd2724b2358b |
2026-02-15 05:05:36.333566 | orchestrator | | volumes_attached | delete_on_termination='True', id='895d9a2f-40ef-44df-bfe2-0525a4d9e1e2' |
2026-02-15 05:05:36.338131 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:36.596782 | orchestrator | + openstack --os-cloud test server show test-4
2026-02-15 05:05:39.653566 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:39.653692 | orchestrator | | Field | Value |
2026-02-15 05:05:39.653709 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:39.653721 | orchestrator | | OS-DCF:diskConfig | MANUAL |
2026-02-15 05:05:39.653733 | orchestrator | | OS-EXT-AZ:availability_zone | nova |
2026-02-15 05:05:39.653744 | orchestrator | | OS-EXT-SRV-ATTR:host | None |
2026-02-15 05:05:39.653755 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 |
2026-02-15 05:05:39.653789 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None |
2026-02-15 05:05:39.653801 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None |
2026-02-15 05:05:39.653830 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None |
2026-02-15 05:05:39.653842 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None |
2026-02-15 05:05:39.653887 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None |
2026-02-15 05:05:39.653899 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None |
2026-02-15 05:05:39.653910 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None |
2026-02-15 05:05:39.653922 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None |
2026-02-15 05:05:39.653933 | orchestrator | | OS-EXT-STS:power_state | Running |
2026-02-15 05:05:39.653944 | orchestrator | | OS-EXT-STS:task_state | None |
2026-02-15 05:05:39.653964 | orchestrator | | OS-EXT-STS:vm_state | active |
2026-02-15 05:05:39.653976 | orchestrator | | OS-SRV-USG:launched_at | 2026-02-15T05:04:09.000000 |
2026-02-15 05:05:39.653995 | orchestrator | | OS-SRV-USG:terminated_at | None |
2026-02-15 05:05:39.654012 | orchestrator | | accessIPv4 | |
2026-02-15 05:05:39.654082 | orchestrator | | accessIPv6 | |
2026-02-15 05:05:39.654095 | orchestrator | | addresses | test=192.168.112.163, 192.168.200.155 |
2026-02-15 05:05:39.654106 | orchestrator | | config_drive | |
2026-02-15 05:05:39.654117 | orchestrator | | created | 2026-02-15T05:03:44Z |
2026-02-15 05:05:39.654129 | orchestrator | | description | None |
2026-02-15 05:05:39.654148 | orchestrator | | flavor | description=, disk='0', ephemeral='0', extra_specs.hw_rng:allowed='true', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:disk0-type='network', extra_specs.scs:name-v1='SCS-1L:1', extra_specs.scs:name-v2='SCS-1L-1', id='SCS-1L-1', is_disabled=, is_public='True', location=, name='SCS-1L-1', original_name='SCS-1L-1', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
2026-02-15 05:05:39.654159 | orchestrator | | hostId | cea69c3fd230c0222fac4794cec61fe2b425e6cb2c5041e8b73c3cd7 |
2026-02-15 05:05:39.654170 | orchestrator | | host_status | None |
2026-02-15 05:05:39.654189 | orchestrator | | id | 3ec1e0e8-e6ca-4027-8f78-a7a69b658cea |
2026-02-15 05:05:39.654201 | orchestrator | | image | N/A (booted from volume) |
2026-02-15 05:05:39.654241 | orchestrator | | key_name | test |
2026-02-15 05:05:39.654337 | orchestrator | | locked | False |
2026-02-15 05:05:39.654358 | orchestrator | | locked_reason | None |
2026-02-15 05:05:39.654370 | orchestrator | | name | test-4 |
2026-02-15 05:05:39.654389 | orchestrator | | pinned_availability_zone | None |
2026-02-15 05:05:39.654400 | orchestrator | | progress | 0 |
2026-02-15 05:05:39.654412 | orchestrator | | project_id | 5df93ba4bc674eefac5332cd7fcc3b29 |
2026-02-15 05:05:39.654423 | orchestrator | | properties | hostname='test-4' |
2026-02-15 05:05:39.654444 | orchestrator | | security_groups | name='ssh' |
2026-02-15 05:05:39.654461 | orchestrator | | | name='icmp' |
2026-02-15 05:05:39.654473 | orchestrator | | server_groups | None |
2026-02-15 05:05:39.654485 | orchestrator | | status | ACTIVE |
2026-02-15 05:05:39.654496 | orchestrator | | tags | test |
2026-02-15 05:05:39.654514 | orchestrator | | trusted_image_certificates | None |
2026-02-15 05:05:39.654526 | orchestrator | | updated | 2026-02-15T05:04:32Z |
2026-02-15 05:05:39.654537 | orchestrator | | user_id | a4962dabc48644498d47bd2724b2358b |
2026-02-15 05:05:39.654548 | orchestrator | | volumes_attached | delete_on_termination='True', id='0c3fcaf5-1060-4992-8321-c6a0fcfda0d3' |
2026-02-15 05:05:39.658550 | orchestrator | +-------------------------------------+--------------------------------------------------------------------------------------------------------+
2026-02-15 05:05:39.927611 | orchestrator | + server_ping
2026-02-15 05:05:39.929162 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2026-02-15 05:05:39.929192 | orchestrator | ++ tr -d '\r'
2026-02-15 05:05:42.935068 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-15 05:05:42.935167 | orchestrator | + ping -c3 192.168.112.163
2026-02-15 05:05:42.953583 | orchestrator | PING 192.168.112.163 (192.168.112.163) 56(84) bytes of data.
2026-02-15 05:05:42.953679 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=1 ttl=63 time=11.0 ms
2026-02-15 05:05:43.946014 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=2 ttl=63 time=2.28 ms
2026-02-15 05:05:44.947597 | orchestrator | 64 bytes from 192.168.112.163: icmp_seq=3 ttl=63 time=1.82 ms
2026-02-15 05:05:44.947692 | orchestrator |
2026-02-15 05:05:44.947709 | orchestrator | --- 192.168.112.163 ping statistics ---
2026-02-15 05:05:44.947723 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-02-15 05:05:44.947747 | orchestrator | rtt min/avg/max/mdev = 1.823/5.033/10.999/4.222 ms
2026-02-15 05:05:44.947760 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-15 05:05:44.947772 | orchestrator | + ping -c3 192.168.112.108
2026-02-15 05:05:44.959147 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2026-02-15 05:05:44.959265 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=6.98 ms
2026-02-15 05:05:45.955039 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.20 ms
2026-02-15 05:05:46.956190 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=1.99 ms
2026-02-15 05:05:46.956317 | orchestrator |
2026-02-15 05:05:46.956332 | orchestrator | --- 192.168.112.108 ping statistics ---
2026-02-15 05:05:46.956345 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2001ms
2026-02-15 05:05:46.956356 | orchestrator | rtt min/avg/max/mdev = 1.992/3.723/6.979/2.303 ms
2026-02-15 05:05:46.956399 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-15 05:05:46.956413 | orchestrator | + ping -c3 192.168.112.180
2026-02-15 05:05:46.971467 | orchestrator | PING 192.168.112.180 (192.168.112.180) 56(84) bytes of data.
2026-02-15 05:05:46.971551 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=1 ttl=63 time=10.1 ms
2026-02-15 05:05:47.965422 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=2 ttl=63 time=2.44 ms
2026-02-15 05:05:48.965494 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=3 ttl=63 time=1.72 ms
2026-02-15 05:05:48.965574 | orchestrator |
2026-02-15 05:05:48.965584 | orchestrator | --- 192.168.112.180 ping statistics ---
2026-02-15 05:05:48.965592 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2026-02-15 05:05:48.965600 | orchestrator | rtt min/avg/max/mdev = 1.722/4.745/10.076/3.780 ms
2026-02-15 05:05:48.966110 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-15 05:05:48.966528 | orchestrator | + ping -c3 192.168.112.195
2026-02-15 05:05:48.981642 | orchestrator | PING 192.168.112.195 (192.168.112.195) 56(84) bytes of data.
2026-02-15 05:05:48.981734 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=1 ttl=63 time=10.6 ms
2026-02-15 05:05:49.975756 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=2 ttl=63 time=3.05 ms
2026-02-15 05:05:50.976019 | orchestrator | 64 bytes from 192.168.112.195: icmp_seq=3 ttl=63 time=2.08 ms
2026-02-15 05:05:50.976124 | orchestrator |
2026-02-15 05:05:50.976320 | orchestrator | --- 192.168.112.195 ping statistics ---
2026-02-15 05:05:50.976336 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2026-02-15 05:05:50.976347 | orchestrator | rtt min/avg/max/mdev = 2.076/5.233/10.573/3.796 ms
2026-02-15 05:05:50.976370 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2026-02-15 05:05:50.976383 | orchestrator | + ping -c3 192.168.112.136
2026-02-15 05:05:50.990867 | orchestrator | PING 192.168.112.136 (192.168.112.136) 56(84) bytes of data.
2026-02-15 05:05:50.990986 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=1 ttl=63 time=9.85 ms 2026-02-15 05:05:51.984535 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=2 ttl=63 time=2.59 ms 2026-02-15 05:05:52.986357 | orchestrator | 64 bytes from 192.168.112.136: icmp_seq=3 ttl=63 time=2.03 ms 2026-02-15 05:05:52.986455 | orchestrator | 2026-02-15 05:05:52.986471 | orchestrator | --- 192.168.112.136 ping statistics --- 2026-02-15 05:05:52.986483 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2026-02-15 05:05:52.986495 | orchestrator | rtt min/avg/max/mdev = 2.032/4.824/9.854/3.563 ms 2026-02-15 05:05:52.986507 | orchestrator | + [[ 9.5.0 == \l\a\t\e\s\t ]] 2026-02-15 05:05:53.423742 | orchestrator | ok: Runtime: 0:07:54.643017 2026-02-15 05:05:53.485037 | 2026-02-15 05:05:53.485238 | TASK [Run tempest] 2026-02-15 05:05:54.023341 | orchestrator | skipping: Conditional result was False 2026-02-15 05:05:54.032906 | 2026-02-15 05:05:54.033031 | TASK [Check prometheus alert status] 2026-02-15 05:05:54.567966 | orchestrator | skipping: Conditional result was False 2026-02-15 05:05:54.579850 | 2026-02-15 05:05:54.579994 | PLAY [Upgrade testbed] 2026-02-15 05:05:54.590922 | 2026-02-15 05:05:54.591041 | TASK [Print next ceph version] 2026-02-15 05:05:54.667821 | orchestrator | ok 2026-02-15 05:05:54.678052 | 2026-02-15 05:05:54.678174 | TASK [Print next openstack version] 2026-02-15 05:05:54.747230 | orchestrator | ok 2026-02-15 05:05:54.758727 | 2026-02-15 05:05:54.758899 | TASK [Print next manager version] 2026-02-15 05:05:54.828252 | orchestrator | ok 2026-02-15 05:05:54.839211 | 2026-02-15 05:05:54.839349 | TASK [Set cloud fact (Zuul deployment)] 2026-02-15 05:05:54.887413 | orchestrator | ok 2026-02-15 05:05:54.903809 | 2026-02-15 05:05:54.903990 | TASK [Set cloud fact (local deployment)] 2026-02-15 05:05:54.931164 | orchestrator | skipping: Conditional result was False 2026-02-15 05:05:54.941380 | 2026-02-15 
05:05:54.941511 | TASK [Fetch manager address] 2026-02-15 05:05:55.251485 | orchestrator | ok 2026-02-15 05:05:55.260182 | 2026-02-15 05:05:55.260321 | TASK [Set manager_host address] 2026-02-15 05:05:55.332110 | orchestrator | ok 2026-02-15 05:05:55.343545 | 2026-02-15 05:05:55.343683 | TASK [Run upgrade] 2026-02-15 05:05:56.095245 | orchestrator | + set -e 2026-02-15 05:05:56.095489 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-15 05:05:56.095527 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-15 05:05:56.095550 | orchestrator | + CEPH_VERSION=reef 2026-02-15 05:05:56.095564 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-15 05:05:56.095577 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-15 05:05:56.095601 | orchestrator | + sh -c '/opt/configuration/scripts/upgrade-manager.sh 10.0.0-rc.1 reef 2024.2 kolla/release' 2026-02-15 05:05:56.105829 | orchestrator | + set -e 2026-02-15 05:05:56.105882 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-15 05:05:56.105890 | orchestrator | ++ export INTERACTIVE=false 2026-02-15 05:05:56.105899 | orchestrator | ++ INTERACTIVE=false 2026-02-15 05:05:56.105904 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-15 05:05:56.105912 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-15 05:05:56.107452 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "org.opencontainers.image.version"}}' osism-ansible 2026-02-15 05:05:56.149181 | orchestrator | + OLD_MANAGER_VERSION=v0.20251130.0 2026-02-15 05:05:56.150084 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-15 05:05:56.191038 | orchestrator | 2026-02-15 05:05:56.191130 | orchestrator | # UPGRADE MANAGER 2026-02-15 05:05:56.191144 | orchestrator | 2026-02-15 05:05:56.191152 | orchestrator | + OLD_OPENSTACK_VERSION=2024.2 2026-02-15 05:05:56.191161 | orchestrator | + echo 2026-02-15 05:05:56.191185 | orchestrator | + echo '# UPGRADE 
MANAGER' 2026-02-15 05:05:56.191194 | orchestrator | + echo 2026-02-15 05:05:56.191201 | orchestrator | + export MANAGER_VERSION=10.0.0-rc.1 2026-02-15 05:05:56.191210 | orchestrator | + MANAGER_VERSION=10.0.0-rc.1 2026-02-15 05:05:56.191216 | orchestrator | + CEPH_VERSION=reef 2026-02-15 05:05:56.191224 | orchestrator | + OPENSTACK_VERSION=2024.2 2026-02-15 05:05:56.191231 | orchestrator | + KOLLA_NAMESPACE=kolla/release 2026-02-15 05:05:56.191238 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 10.0.0-rc.1 2026-02-15 05:05:56.198744 | orchestrator | + set -e 2026-02-15 05:05:56.198864 | orchestrator | + VERSION=10.0.0-rc.1 2026-02-15 05:05:56.198881 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 10.0.0-rc.1/g' /opt/configuration/environments/manager/configuration.yml 2026-02-15 05:05:56.204656 | orchestrator | + [[ 10.0.0-rc.1 != \l\a\t\e\s\t ]] 2026-02-15 05:05:56.204729 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-15 05:05:56.209104 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2026-02-15 05:05:56.213134 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-15 05:05:56.220087 | orchestrator | /opt/configuration ~ 2026-02-15 05:05:56.220142 | orchestrator | + set -e 2026-02-15 05:05:56.220163 | orchestrator | + pushd /opt/configuration 2026-02-15 05:05:56.220210 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-15 05:05:56.220222 | orchestrator | + source /opt/venv/bin/activate 2026-02-15 05:05:56.221437 | orchestrator | ++ deactivate nondestructive 2026-02-15 05:05:56.221465 | orchestrator | ++ '[' -n '' ']' 2026-02-15 05:05:56.221476 | orchestrator | ++ '[' -n '' ']' 2026-02-15 05:05:56.221487 | orchestrator | ++ hash -r 2026-02-15 05:05:56.221497 | orchestrator | ++ '[' -n '' ']' 2026-02-15 05:05:56.221508 | orchestrator | ++ unset VIRTUAL_ENV 
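The `set-manager-version.sh` step above pins `manager_version` with `sed` and, for a concrete (non-`latest`) target, deletes the explicit `ceph_version`/`openstack_version` pins so the versions bundled with the manager release take effect. A minimal standalone reproduction of that logic against a scratch file (the file contents here are illustrative, not the real `/opt/configuration` tree):

```shell
#!/usr/bin/env bash
set -e

# Scratch stand-in for environments/manager/configuration.yml
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
manager_version: 9.5.0
ceph_version: quincy
openstack_version: 2024.1
EOF

VERSION=10.0.0-rc.1

# Pin the manager version in place, as the upgrade script does
sed -i "s/manager_version: .*/manager_version: ${VERSION}/g" "$CONF"

# For a pinned (non-latest) target, drop the explicit service pins
if [[ "$VERSION" != latest ]]; then
  sed -i '/ceph_version:/d' "$CONF"
  sed -i '/openstack_version:/d' "$CONF"
fi

cat "$CONF"
```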
2026-02-15 05:05:56.221519 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-15 05:05:56.221530 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-02-15 05:05:56.221542 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-15 05:05:56.221553 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-15 05:05:56.221564 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-15 05:05:56.221575 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-15 05:05:56.221594 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-15 05:05:56.221626 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-15 05:05:56.221639 | orchestrator | ++ export PATH 2026-02-15 05:05:56.221801 | orchestrator | ++ '[' -n '' ']' 2026-02-15 05:05:56.221820 | orchestrator | ++ '[' -z '' ']' 2026-02-15 05:05:56.221831 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-15 05:05:56.221842 | orchestrator | ++ PS1='(venv) ' 2026-02-15 05:05:56.221853 | orchestrator | ++ export PS1 2026-02-15 05:05:56.221864 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-15 05:05:56.221875 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-15 05:05:56.221886 | orchestrator | ++ hash -r 2026-02-15 05:05:56.221900 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-15 05:05:57.337868 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-15 05:05:57.339865 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-15 05:05:57.341854 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-15 05:05:57.343903 | orchestrator | Requirement already satisfied: PyYAML in 
/opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-15 05:05:57.345694 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (26.0) 2026-02-15 05:05:57.357309 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-15 05:05:57.359019 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-15 05:05:57.360052 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-15 05:05:57.361533 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-15 05:05:57.395811 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-15 05:05:57.397499 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-15 05:05:57.399316 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-15 05:05:57.400938 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-15 05:05:57.404935 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-15 05:05:57.629537 | orchestrator | ++ which gilt 2026-02-15 05:05:57.630461 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-15 05:05:57.630502 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-15 05:05:57.875738 | orchestrator | osism.cfg-generics: 2026-02-15 05:05:57.982788 | orchestrator | - copied (v0.20251130.0) 
/home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-15 05:05:57.983681 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-15 05:05:57.985078 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-15 05:05:57.985101 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-15 05:05:58.861088 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-15 05:05:58.872819 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-15 05:05:59.358003 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-15 05:05:59.422893 | orchestrator | ~ 2026-02-15 05:05:59.422984 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-15 05:05:59.422993 | orchestrator | + deactivate 2026-02-15 05:05:59.422999 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-15 05:05:59.423006 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-15 05:05:59.423011 | orchestrator | + export PATH 2026-02-15 05:05:59.423016 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-15 05:05:59.423021 | orchestrator | + '[' -n '' ']' 2026-02-15 05:05:59.423026 | orchestrator | + hash -r 2026-02-15 05:05:59.423030 | orchestrator | + '[' -n '' ']' 2026-02-15 05:05:59.423035 | orchestrator | + unset VIRTUAL_ENV 2026-02-15 05:05:59.423039 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-15 05:05:59.423044 | 
orchestrator | + '[' '!' '' = nondestructive ']' 2026-02-15 05:05:59.423049 | orchestrator | + unset -f deactivate 2026-02-15 05:05:59.423053 | orchestrator | + popd 2026-02-15 05:05:59.424069 | orchestrator | + [[ 10.0.0-rc.1 == \l\a\t\e\s\t ]] 2026-02-15 05:05:59.424199 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release 2026-02-15 05:05:59.429723 | orchestrator | + set -e 2026-02-15 05:05:59.429792 | orchestrator | + NAMESPACE=kolla/release 2026-02-15 05:05:59.429803 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-15 05:05:59.435569 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2026-02-15 05:05:59.439677 | orchestrator | /opt/configuration ~ 2026-02-15 05:05:59.439702 | orchestrator | + set -e 2026-02-15 05:05:59.439707 | orchestrator | + pushd /opt/configuration 2026-02-15 05:05:59.439712 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-15 05:05:59.439717 | orchestrator | + source /opt/venv/bin/activate 2026-02-15 05:05:59.439721 | orchestrator | ++ deactivate nondestructive 2026-02-15 05:05:59.439726 | orchestrator | ++ '[' -n '' ']' 2026-02-15 05:05:59.439730 | orchestrator | ++ '[' -n '' ']' 2026-02-15 05:05:59.439735 | orchestrator | ++ hash -r 2026-02-15 05:05:59.439739 | orchestrator | ++ '[' -n '' ']' 2026-02-15 05:05:59.439743 | orchestrator | ++ unset VIRTUAL_ENV 2026-02-15 05:05:59.439747 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-02-15 05:05:59.439751 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2026-02-15 05:05:59.439802 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-02-15 05:05:59.439810 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-02-15 05:05:59.439814 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-02-15 05:05:59.439822 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-02-15 05:05:59.439827 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-15 05:05:59.439834 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-15 05:05:59.439838 | orchestrator | ++ export PATH 2026-02-15 05:05:59.439842 | orchestrator | ++ '[' -n '' ']' 2026-02-15 05:05:59.439846 | orchestrator | ++ '[' -z '' ']' 2026-02-15 05:05:59.439850 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-02-15 05:05:59.439854 | orchestrator | ++ PS1='(venv) ' 2026-02-15 05:05:59.439858 | orchestrator | ++ export PS1 2026-02-15 05:05:59.439863 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-02-15 05:05:59.439867 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-02-15 05:05:59.439871 | orchestrator | ++ hash -r 2026-02-15 05:05:59.439875 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2026-02-15 05:05:59.958862 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2026-02-15 05:05:59.960211 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.5) 2026-02-15 05:05:59.961380 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2026-02-15 05:05:59.962754 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.3) 2026-02-15 05:05:59.964142 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (26.0) 2026-02-15 05:05:59.974957 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.3.1) 2026-02-15 05:05:59.976345 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2026-02-15 05:05:59.977331 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.20) 2026-02-15 05:05:59.978812 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2026-02-15 05:06:00.011491 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.4) 2026-02-15 05:06:00.012953 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.11) 2026-02-15 05:06:00.014760 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.6.3) 2026-02-15 05:06:00.016337 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2026.1.4) 2026-02-15 05:06:00.020468 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.3) 2026-02-15 05:06:00.260262 | orchestrator | ++ which gilt 2026-02-15 05:06:00.262391 | orchestrator | + GILT=/opt/venv/bin/gilt 2026-02-15 05:06:00.262435 | orchestrator | + /opt/venv/bin/gilt overlay 2026-02-15 05:06:00.465233 | orchestrator | osism.cfg-generics: 2026-02-15 05:06:00.531466 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2026-02-15 05:06:00.531597 | orchestrator | - copied 
(v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2026-02-15 05:06:00.531628 | orchestrator | - copied (v0.20251130.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2026-02-15 05:06:00.531643 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2026-02-15 05:06:01.123297 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2026-02-15 05:06:01.132040 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2026-02-15 05:06:01.470450 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2026-02-15 05:06:01.521934 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-02-15 05:06:01.522043 | orchestrator | + deactivate 2026-02-15 05:06:01.522073 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-02-15 05:06:01.522081 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-02-15 05:06:01.522086 | orchestrator | + export PATH 2026-02-15 05:06:01.522091 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-02-15 05:06:01.522097 | orchestrator | + '[' -n '' ']' 2026-02-15 05:06:01.522102 | orchestrator | + hash -r 2026-02-15 05:06:01.522107 | orchestrator | + '[' -n '' ']' 2026-02-15 05:06:01.522111 | orchestrator | + unset VIRTUAL_ENV 2026-02-15 05:06:01.522117 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2026-02-15 05:06:01.522122 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2026-02-15 05:06:01.522127 | orchestrator | + unset -f deactivate 2026-02-15 05:06:01.522141 | orchestrator | + popd 2026-02-15 05:06:01.522248 | orchestrator | ~ 2026-02-15 05:06:01.523643 | orchestrator | ++ semver v0.20251130.0 6.0.0 2026-02-15 05:06:01.576188 | orchestrator | + [[ -1 -ge 0 ]] 2026-02-15 05:06:01.577528 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-15 05:06:01.684143 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-15 05:06:01.684276 | orchestrator | + sed -i '/^om_enable_rabbitmq_high_availability:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-15 05:06:01.692050 | orchestrator | + sed -i '/^om_enable_rabbitmq_quorum_queues:/d' /opt/configuration/environments/kolla/configuration.yml 2026-02-15 05:06:01.699308 | orchestrator | +++ semver v0.20251130.0 9.5.0 2026-02-15 05:06:01.769651 | orchestrator | ++ '[' -1 -le 0 ']' 2026-02-15 05:06:01.770680 | orchestrator | +++ semver 10.0.0-rc.1 10.0.0-0 2026-02-15 05:06:01.874360 | orchestrator | ++ '[' 1 -ge 0 ']' 2026-02-15 05:06:01.874478 | orchestrator | ++ echo true 2026-02-15 05:06:01.874503 | orchestrator | + MANAGER_UPGRADE_CROSSES_10=true 2026-02-15 05:06:01.876279 | orchestrator | +++ semver 2024.2 2024.2 2026-02-15 05:06:01.951506 | orchestrator | ++ '[' 0 -le 0 ']' 2026-02-15 05:06:01.952366 | orchestrator | +++ semver 2024.2 2025.1 2026-02-15 05:06:02.012436 | orchestrator | ++ '[' -1 -ge 0 ']' 2026-02-15 05:06:02.012545 | orchestrator | ++ echo false 2026-02-15 05:06:02.012827 | orchestrator | + OPENSTACK_UPGRADE_CROSSES_2025=false 2026-02-15 05:06:02.012863 | orchestrator | + [[ true == \t\r\u\e ]] 2026-02-15 05:06:02.012881 | orchestrator | + echo 'om_rpc_vhost: openstack' 2026-02-15 05:06:02.013079 | orchestrator | + echo 'om_notify_vhost: openstack' 2026-02-15 05:06:02.013110 | orchestrator | + sed -i 's#manager_listener_broker_vhost: .*#manager_listener_broker_vhost: /openstack#g' /opt/configuration/environments/manager/configuration.yml 
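The `semver A B` helper traced above prints `-1`, `0`, or `1`, and the script combines two such comparisons into the `MANAGER_UPGRADE_CROSSES_10` flag (old version at or below 9.5.0, new version at or above `10.0.0-0`). A rough stand-in for that comparator using GNU `sort -V` — prerelease ordering is only approximated, so this is a sketch, not the real helper:

```shell
#!/usr/bin/env bash
set -e

# Rough stand-in for the `semver A B` helper: prints -1, 0 or 1.
# Strips a leading "v" (the manager image tag is v0.20251130.0).
semver_cmp() {
  local a=${1#v} b=${2#v}
  if [[ "$a" == "$b" ]]; then
    echo 0
  elif [[ "$(printf '%s\n%s\n' "$a" "$b" | sort -V | head -n1)" == "$a" ]]; then
    echo -1
  else
    echo 1
  fi
}

OLD=v0.20251130.0
NEW=10.0.0-rc.1

# Crosses the 10.0.0 boundary when the old version is at or below 9.5.0
# and the new version is at or above the 10.0.0-0 prerelease floor
MANAGER_UPGRADE_CROSSES_10=false
if [[ $(semver_cmp "$OLD" 9.5.0) -le 0 && $(semver_cmp "$NEW" 10.0.0-0) -ge 0 ]]; then
  MANAGER_UPGRADE_CROSSES_10=true
fi
echo "MANAGER_UPGRADE_CROSSES_10=$MANAGER_UPGRADE_CROSSES_10"
```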
2026-02-15 05:06:02.018689 | orchestrator | + echo 'export RABBITMQ3TO4=true' 2026-02-15 05:06:02.019108 | orchestrator | + sudo tee -a /opt/manager-vars.sh 2026-02-15 05:06:02.041057 | orchestrator | export RABBITMQ3TO4=true 2026-02-15 05:06:02.043633 | orchestrator | + osism update manager 2026-02-15 05:06:07.840035 | orchestrator | Collecting uv 2026-02-15 05:06:07.954377 | orchestrator | Downloading uv-0.10.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (11 kB) 2026-02-15 05:06:07.978507 | orchestrator | Downloading uv-0.10.2-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (23.0 MB) 2026-02-15 05:06:08.767360 | orchestrator | ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 23.0/23.0 MB 32.2 MB/s eta 0:00:00 2026-02-15 05:06:08.823224 | orchestrator | Installing collected packages: uv 2026-02-15 05:06:09.270669 | orchestrator | Successfully installed uv-0.10.2 2026-02-15 05:06:10.281830 | orchestrator | Resolved 11 packages in 681ms 2026-02-15 05:06:10.312759 | orchestrator | Downloading netaddr (2.2MiB) 2026-02-15 05:06:10.313524 | orchestrator | Downloading cryptography (4.3MiB) 2026-02-15 05:06:10.313560 | orchestrator | Downloading ansible-core (2.1MiB) 2026-02-15 05:06:10.430648 | orchestrator | Downloading ansible (54.5MiB) 2026-02-15 05:06:10.686355 | orchestrator | Downloaded netaddr 2026-02-15 05:06:10.767866 | orchestrator | Downloaded cryptography 2026-02-15 05:06:10.900778 | orchestrator | Downloaded ansible-core 2026-02-15 05:06:19.805887 | orchestrator | Downloaded ansible 2026-02-15 05:06:19.806184 | orchestrator | Prepared 11 packages in 9.52s 2026-02-15 05:06:20.388235 | orchestrator | Installed 11 packages in 580ms 2026-02-15 05:06:20.388337 | orchestrator | + ansible==11.11.0 2026-02-15 05:06:20.388356 | orchestrator | + ansible-core==2.18.13 2026-02-15 05:06:20.388375 | orchestrator | + cffi==2.0.0 2026-02-15 05:06:20.388392 | orchestrator | + cryptography==46.0.5 2026-02-15 05:06:20.388411 | orchestrator | + 
jinja2==3.1.6 2026-02-15 05:06:20.388428 | orchestrator | + markupsafe==3.0.3 2026-02-15 05:06:20.388445 | orchestrator | + netaddr==1.3.0 2026-02-15 05:06:20.388462 | orchestrator | + packaging==26.0 2026-02-15 05:06:20.388472 | orchestrator | + pycparser==3.0 2026-02-15 05:06:20.388482 | orchestrator | + pyyaml==6.0.3 2026-02-15 05:06:20.388492 | orchestrator | + resolvelib==1.0.1 2026-02-15 05:06:21.504681 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-201927rmzq6cje/tmp8y8o9g3s/ansible-collection-servicesh2uh_cxd'... 2026-02-15 05:06:22.792546 | orchestrator | Your branch is up to date with 'origin/main'. 2026-02-15 05:06:22.792643 | orchestrator | Already on 'main' 2026-02-15 05:06:23.275948 | orchestrator | Starting galaxy collection install process 2026-02-15 05:06:23.276051 | orchestrator | Process install dependency map 2026-02-15 05:06:23.276066 | orchestrator | Starting collection install process 2026-02-15 05:06:23.276079 | orchestrator | Installing 'osism.services:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/services' 2026-02-15 05:06:23.276118 | orchestrator | Created collection for osism.services:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/services 2026-02-15 05:06:23.276130 | orchestrator | osism.services:999.0.0 was installed successfully 2026-02-15 05:06:23.782609 | orchestrator | Cloning into '/home/dragon/.ansible/tmp/ansible-local-201974l37259y4/tmpe9q03_0q/ansible-playbooks-managerdbzpzjj4'... 2026-02-15 05:06:24.373372 | orchestrator | Your branch is up to date with 'origin/main'. 
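A few entries back, the upgrade script persisted the `RABBITMQ3TO4=true` migration flag by appending to `/opt/manager-vars.sh` with `sudo tee -a`, so later shells can pick it up with `source`. A minimal reproduction of that append-and-source pattern (scratch file instead of the real path, no `sudo`):

```shell
#!/usr/bin/env bash
set -e

# Scratch stand-in for /opt/manager-vars.sh (the real file needs sudo)
VARS=$(mktemp)

# Persist the flag via tee -a, as the upgrade script does; tee also
# echoes the line to stdout, which is what shows up in the log
echo 'export RABBITMQ3TO4=true' | tee -a "$VARS"

# A later step sources the file and sees the flag
source "$VARS"
echo "RABBITMQ3TO4=$RABBITMQ3TO4"
```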
2026-02-15 05:06:24.373439 | orchestrator | Already on 'main' 2026-02-15 05:06:24.667571 | orchestrator | Starting galaxy collection install process 2026-02-15 05:06:24.667672 | orchestrator | Process install dependency map 2026-02-15 05:06:24.667688 | orchestrator | Starting collection install process 2026-02-15 05:06:24.667700 | orchestrator | Installing 'osism.manager:999.0.0' to '/home/dragon/.ansible/collections/ansible_collections/osism/manager' 2026-02-15 05:06:24.667713 | orchestrator | Created collection for osism.manager:999.0.0 at /home/dragon/.ansible/collections/ansible_collections/osism/manager 2026-02-15 05:06:24.667725 | orchestrator | osism.manager:999.0.0 was installed successfully 2026-02-15 05:06:25.295510 | orchestrator | [WARNING]: Invalid characters were found in group names but not replaced, use 2026-02-15 05:06:25.295610 | orchestrator | -vvvv to see details 2026-02-15 05:06:25.762592 | orchestrator | 2026-02-15 05:06:25.762712 | orchestrator | PLAY [Apply role manager] ****************************************************** 2026-02-15 05:06:25.762731 | orchestrator | 2026-02-15 05:06:25.762743 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-02-15 05:06:29.909141 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:29.909245 | orchestrator | 2026-02-15 05:06:29.909261 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-02-15 05:06:29.972957 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-02-15 05:06:29.973048 | orchestrator | 2026-02-15 05:06:29.973113 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-02-15 05:06:31.768689 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:31.768791 | orchestrator | 2026-02-15 05:06:31.768807 | orchestrator | TASK 
[osism.services.manager : Gather variables for each operating system] ***** 2026-02-15 05:06:31.822467 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:31.822536 | orchestrator | 2026-02-15 05:06:31.822544 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-02-15 05:06:31.886161 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-02-15 05:06:31.886285 | orchestrator | 2026-02-15 05:06:31.886301 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-02-15 05:06:36.144338 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible) 2026-02-15 05:06:36.144459 | orchestrator | ok: [testbed-manager] => (item=/opt/archive) 2026-02-15 05:06:36.144483 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/configuration) 2026-02-15 05:06:36.144518 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/data) 2026-02-15 05:06:36.144537 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-02-15 05:06:36.144555 | orchestrator | ok: [testbed-manager] => (item=/opt/manager/secrets) 2026-02-15 05:06:36.144571 | orchestrator | ok: [testbed-manager] => (item=/opt/ansible/secrets) 2026-02-15 05:06:36.144590 | orchestrator | ok: [testbed-manager] => (item=/opt/state) 2026-02-15 05:06:36.144609 | orchestrator | 2026-02-15 05:06:36.144630 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2026-02-15 05:06:37.199365 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:37.199487 | orchestrator | 2026-02-15 05:06:37.199518 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-02-15 05:06:38.208772 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:38.208872 | orchestrator | 2026-02-15 05:06:38.208889 | orchestrator | TASK [osism.services.manager : Include ara 
config tasks] *********************** 2026-02-15 05:06:38.298717 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-02-15 05:06:38.298809 | orchestrator | 2026-02-15 05:06:38.298822 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-02-15 05:06:40.192364 | orchestrator | ok: [testbed-manager] => (item=ara) 2026-02-15 05:06:40.192438 | orchestrator | ok: [testbed-manager] => (item=ara-server) 2026-02-15 05:06:40.192446 | orchestrator | 2026-02-15 05:06:40.192452 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-02-15 05:06:41.110106 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:41.110215 | orchestrator | 2026-02-15 05:06:41.110233 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-02-15 05:06:41.172967 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:06:41.173104 | orchestrator | 2026-02-15 05:06:41.173124 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-02-15 05:06:41.258550 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-02-15 05:06:41.258644 | orchestrator | 2026-02-15 05:06:41.258659 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] ***************** 2026-02-15 05:06:42.309458 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:42.309582 | orchestrator | 2026-02-15 05:06:42.309599 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-02-15 05:06:42.379950 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-02-15 05:06:42.380077 | 
orchestrator | 2026-02-15 05:06:42.380097 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-02-15 05:06:44.480924 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-15 05:06:44.481144 | orchestrator | ok: [testbed-manager] => (item=None) 2026-02-15 05:06:44.481166 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:44.481184 | orchestrator | 2026-02-15 05:06:44.481199 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-02-15 05:06:45.462400 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:45.462507 | orchestrator | 2026-02-15 05:06:45.462525 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-02-15 05:06:45.552409 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:06:45.552502 | orchestrator | 2026-02-15 05:06:45.552516 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-02-15 05:06:45.655347 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-02-15 05:06:45.655474 | orchestrator | 2026-02-15 05:06:45.655500 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-02-15 05:06:46.354473 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:46.354580 | orchestrator | 2026-02-15 05:06:46.354599 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-02-15 05:06:46.939090 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:46.939193 | orchestrator | 2026-02-15 05:06:46.939209 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-02-15 05:06:48.908370 | orchestrator | ok: [testbed-manager] => (item=conductor) 2026-02-15 05:06:48.908475 | orchestrator | ok: [testbed-manager] => 
(item=openstack) 2026-02-15 05:06:48.908491 | orchestrator | 2026-02-15 05:06:48.908504 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-02-15 05:06:50.116975 | orchestrator | changed: [testbed-manager] 2026-02-15 05:06:50.117142 | orchestrator | 2026-02-15 05:06:50.117159 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-02-15 05:06:50.661091 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:50.661175 | orchestrator | 2026-02-15 05:06:50.661186 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-02-15 05:06:51.200641 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:51.200743 | orchestrator | 2026-02-15 05:06:51.200842 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-02-15 05:06:51.258835 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:06:51.258918 | orchestrator | 2026-02-15 05:06:51.258926 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-02-15 05:06:51.342958 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-02-15 05:06:51.343082 | orchestrator | 2026-02-15 05:06:51.343099 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2026-02-15 05:06:51.406389 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:51.406497 | orchestrator | 2026-02-15 05:06:51.406517 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-02-15 05:06:54.423439 | orchestrator | ok: [testbed-manager] => (item=osism) 2026-02-15 05:06:54.423569 | orchestrator | ok: [testbed-manager] => (item=osism-update-docker) 2026-02-15 05:06:54.423590 | orchestrator | ok: [testbed-manager] => (item=osism-update-manager) 
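The wrapper-script task above fans out over a list of items (`osism`, `osism-update-docker`, `osism-update-manager`), one templated file per item. A hypothetical sketch of such a looped task — the template names, destination path, and mode are assumptions, not taken from the actual osism.services role:

```yaml
- name: Copy wrapper scripts
  ansible.builtin.template:
    src: "{{ item }}.sh.j2"
    dest: "/usr/local/bin/{{ item }}"
    mode: "0755"
  loop:
    - osism
    - osism-update-docker
    - osism-update-manager
```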
2026-02-15 05:06:54.423601 | orchestrator | 2026-02-15 05:06:54.423613 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-02-15 05:06:55.454177 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:55.454281 | orchestrator | 2026-02-15 05:06:55.454299 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-02-15 05:06:56.473346 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:56.473445 | orchestrator | 2026-02-15 05:06:56.473461 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-02-15 05:06:57.562603 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:57.562707 | orchestrator | 2026-02-15 05:06:57.562724 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-02-15 05:06:57.629367 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-02-15 05:06:57.629481 | orchestrator | 2026-02-15 05:06:57.629497 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-02-15 05:06:57.691151 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:57.691274 | orchestrator | 2026-02-15 05:06:57.691289 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2026-02-15 05:06:58.717752 | orchestrator | ok: [testbed-manager] => (item=osism-include) 2026-02-15 05:06:58.717846 | orchestrator | 2026-02-15 05:06:58.717861 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-02-15 05:06:58.817283 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-02-15 05:06:58.817393 | orchestrator | 2026-02-15 05:06:58.817409 | orchestrator | TASK 
[osism.services.manager : Copy manager systemd unit file] ***************** 2026-02-15 05:06:59.841379 | orchestrator | ok: [testbed-manager] 2026-02-15 05:06:59.841488 | orchestrator | 2026-02-15 05:06:59.841505 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-02-15 05:07:00.981360 | orchestrator | ok: [testbed-manager] 2026-02-15 05:07:00.981477 | orchestrator | 2026-02-15 05:07:00.981493 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-02-15 05:07:01.071566 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:07:01.071660 | orchestrator | 2026-02-15 05:07:01.071674 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-02-15 05:07:01.143891 | orchestrator | ok: [testbed-manager] 2026-02-15 05:07:01.143970 | orchestrator | 2026-02-15 05:07:01.143980 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-02-15 05:07:02.458585 | orchestrator | changed: [testbed-manager] 2026-02-15 05:07:02.458688 | orchestrator | 2026-02-15 05:07:02.458704 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-02-15 05:08:13.982646 | orchestrator | changed: [testbed-manager] 2026-02-15 05:08:13.982764 | orchestrator | 2026-02-15 05:08:13.982782 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-02-15 05:08:15.228141 | orchestrator | ok: [testbed-manager] 2026-02-15 05:08:15.228237 | orchestrator | 2026-02-15 05:08:15.228252 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-02-15 05:08:15.292174 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:08:15.292261 | orchestrator | 2026-02-15 05:08:15.292270 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-02-15 
05:08:16.088007 | orchestrator | ok: [testbed-manager] 2026-02-15 05:08:16.088116 | orchestrator | 2026-02-15 05:08:16.088132 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-02-15 05:08:16.161464 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:08:16.161588 | orchestrator | 2026-02-15 05:08:16.161615 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-15 05:08:16.161668 | orchestrator | 2026-02-15 05:08:16.161681 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-02-15 05:08:35.478157 | orchestrator | changed: [testbed-manager] 2026-02-15 05:08:35.478251 | orchestrator | 2026-02-15 05:08:35.478262 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-02-15 05:09:35.537912 | orchestrator | Pausing for 60 seconds 2026-02-15 05:09:35.537983 | orchestrator | changed: [testbed-manager] 2026-02-15 05:09:35.537990 | orchestrator | 2026-02-15 05:09:35.537995 | orchestrator | RUNNING HANDLER [osism.services.manager : Register that manager service was restarted] *** 2026-02-15 05:09:35.604503 | orchestrator | ok: [testbed-manager] 2026-02-15 05:09:35.604565 | orchestrator | 2026-02-15 05:09:35.604570 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-02-15 05:09:39.263910 | orchestrator | changed: [testbed-manager] 2026-02-15 05:09:39.263982 | orchestrator | 2026-02-15 05:09:39.263990 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2026-02-15 05:10:42.126094 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-02-15 05:10:42.126204 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2026-02-15 05:10:42.126217 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-02-15 05:10:42.126229 | orchestrator | changed: [testbed-manager] 2026-02-15 05:10:42.126240 | orchestrator | 2026-02-15 05:10:42.126251 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-02-15 05:10:53.573572 | orchestrator | changed: [testbed-manager] 2026-02-15 05:10:53.573697 | orchestrator | 2026-02-15 05:10:53.573716 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-02-15 05:10:53.664653 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-02-15 05:10:53.664781 | orchestrator | 2026-02-15 05:10:53.664797 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-02-15 05:10:53.664810 | orchestrator | 2026-02-15 05:10:53.664821 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-02-15 05:10:53.739799 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:10:53.739895 | orchestrator | 2026-02-15 05:10:53.739909 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-02-15 05:10:53.812726 | orchestrator | included: /home/dragon/.ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-02-15 05:10:53.812824 | orchestrator | 2026-02-15 05:10:53.812861 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] **** 2026-02-15 05:10:54.959732 | orchestrator | changed: [testbed-manager] 2026-02-15 05:10:54.959833 | orchestrator | 2026-02-15 05:10:54.959849 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-02-15 05:10:58.704556 
| orchestrator | ok: [testbed-manager] 2026-02-15 05:10:58.704647 | orchestrator | 2026-02-15 05:10:58.704656 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-02-15 05:10:58.797995 | orchestrator | ok: [testbed-manager] => { 2026-02-15 05:10:58.798147 | orchestrator | "version_check_result.stdout_lines": [ 2026-02-15 05:10:58.798163 | orchestrator | "=== OSISM Container Version Check ===", 2026-02-15 05:10:58.798175 | orchestrator | "Checking running containers against expected versions...", 2026-02-15 05:10:58.798188 | orchestrator | "", 2026-02-15 05:10:58.798199 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-02-15 05:10:58.798211 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-15 05:10:58.798222 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.798232 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:0.20251208.0", 2026-02-15 05:10:58.798243 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.798254 | orchestrator | "", 2026-02-15 05:10:58.798265 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-02-15 05:10:58.798276 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-15 05:10:58.798287 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.798297 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:0.20251208.0", 2026-02-15 05:10:58.798308 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.798318 | orchestrator | "", 2026-02-15 05:10:58.798329 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes Service)", 2026-02-15 05:10:58.798340 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-15 05:10:58.798350 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.798361 | orchestrator | " Running: 
registry.osism.tech/osism/osism-kubernetes:0.20251208.0", 2026-02-15 05:10:58.798371 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.798382 | orchestrator | "", 2026-02-15 05:10:58.798392 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-02-15 05:10:58.798403 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-15 05:10:58.798414 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.798424 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:0.20251208.0", 2026-02-15 05:10:58.798434 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.798445 | orchestrator | "", 2026-02-15 05:10:58.798456 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-02-15 05:10:58.798467 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-15 05:10:58.798477 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.798488 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:0.20251208.0", 2026-02-15 05:10:58.798524 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.798537 | orchestrator | "", 2026-02-15 05:10:58.798549 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-02-15 05:10:58.798582 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.798595 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.798608 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.798620 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.798630 | orchestrator | "", 2026-02-15 05:10:58.798641 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-02-15 05:10:58.798652 | orchestrator | " Expected: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-15 05:10:58.798663 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.798673 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-02-15 
05:10:58.798684 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.798695 | orchestrator | "", 2026-02-15 05:10:58.798705 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-02-15 05:10:58.798716 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-15 05:10:58.798727 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.798747 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-02-15 05:10:58.798758 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.798769 | orchestrator | "", 2026-02-15 05:10:58.798780 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-02-15 05:10:58.798791 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-15 05:10:58.798801 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.798812 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:0.20251208.0", 2026-02-15 05:10:58.798823 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.798833 | orchestrator | "", 2026-02-15 05:10:58.798849 | orchestrator | "Checking service: redis (Redis Cache)", 2026-02-15 05:10:58.798860 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-15 05:10:58.798872 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.798883 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-02-15 05:10:58.798893 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.798904 | orchestrator | "", 2026-02-15 05:10:58.798915 | orchestrator | "Checking service: api (OSISM API Service)", 2026-02-15 05:10:58.798925 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.798936 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.798947 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.798957 | orchestrator | " Status: ✅ MATCH", 2026-02-15 
05:10:58.798968 | orchestrator | "", 2026-02-15 05:10:58.798979 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-02-15 05:10:58.798989 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.799000 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.799011 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.799021 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.799032 | orchestrator | "", 2026-02-15 05:10:58.799042 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-02-15 05:10:58.799053 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.799064 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.799074 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.799085 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.799095 | orchestrator | "", 2026-02-15 05:10:58.799106 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-02-15 05:10:58.799116 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.799127 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.799138 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.799168 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.799179 | orchestrator | "", 2026-02-15 05:10:58.799190 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-02-15 05:10:58.799201 | orchestrator | " Expected: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.799219 | orchestrator | " Enabled: true", 2026-02-15 05:10:58.799230 | orchestrator | " Running: registry.osism.tech/osism/osism:0.20251208.0", 2026-02-15 05:10:58.799241 | orchestrator | " Status: ✅ MATCH", 2026-02-15 05:10:58.799252 | orchestrator | "", 2026-02-15 05:10:58.799262 | orchestrator | "=== Summary 
===", 2026-02-15 05:10:58.799273 | orchestrator | "Errors (version mismatches): 0", 2026-02-15 05:10:58.799284 | orchestrator | "Warnings (expected containers not running): 0", 2026-02-15 05:10:58.799295 | orchestrator | "", 2026-02-15 05:10:58.799306 | orchestrator | "✅ All running containers match expected versions!" 2026-02-15 05:10:58.799317 | orchestrator | ] 2026-02-15 05:10:58.799328 | orchestrator | } 2026-02-15 05:10:58.799339 | orchestrator | 2026-02-15 05:10:58.799350 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-02-15 05:10:58.873708 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:10:58.873812 | orchestrator | 2026-02-15 05:10:58.873827 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:10:58.873840 | orchestrator | testbed-manager : ok=51 changed=9 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0 2026-02-15 05:10:58.873851 | orchestrator | 2026-02-15 05:11:11.428720 | orchestrator | 2026-02-15 05:11:11 | INFO  | Task 14ca3f4f-b792-4099-a469-d9d06912f7d7 (sync inventory) is running in background. Output coming soon. 
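The version check above compares, for each service, the expected image reference from the configuration against the image the container is actually running, and counts mismatches as errors. A minimal sketch of that per-service comparison (not the actual verify-versions script; in the real job the "running" value comes from `docker inspect`, whereas here it is a plain argument so the sketch stays self-contained):

```shell
# Sketch: compare an expected image reference against the running one,
# mirroring the MATCH / mismatch lines in the check output above.
check_service() {
    local service="$1" expected="$2" running="$3"
    if [ "$expected" = "$running" ]; then
        echo "${service}: MATCH"
    else
        echo "${service}: MISMATCH (expected ${expected}, running ${running})"
        return 1
    fi
}

# Example using one of the services from this run:
check_service mariadb \
    registry.osism.tech/dockerhub/library/mariadb:11.8.4 \
    registry.osism.tech/dockerhub/library/mariadb:11.8.4
```

In this run all fifteen services match, so the summary reports zero errors and zero warnings.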
2026-02-15 05:11:40.909144 | orchestrator | 2026-02-15 05:11:12 | INFO  | Starting group_vars file reorganization 2026-02-15 05:11:40.909260 | orchestrator | 2026-02-15 05:11:12 | INFO  | Moved 0 file(s) to their respective directories 2026-02-15 05:11:40.909275 | orchestrator | 2026-02-15 05:11:12 | INFO  | Group_vars file reorganization completed 2026-02-15 05:11:40.909306 | orchestrator | 2026-02-15 05:11:15 | INFO  | Starting variable preparation from inventory 2026-02-15 05:11:40.909318 | orchestrator | 2026-02-15 05:11:19 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2026-02-15 05:11:40.909329 | orchestrator | 2026-02-15 05:11:19 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2026-02-15 05:11:40.909340 | orchestrator | 2026-02-15 05:11:19 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2026-02-15 05:11:40.909351 | orchestrator | 2026-02-15 05:11:19 | INFO  | 3 file(s) written, 6 host(s) processed 2026-02-15 05:11:40.909362 | orchestrator | 2026-02-15 05:11:19 | INFO  | Variable preparation completed 2026-02-15 05:11:40.909373 | orchestrator | 2026-02-15 05:11:20 | INFO  | Starting inventory overwrite handling 2026-02-15 05:11:40.909384 | orchestrator | 2026-02-15 05:11:20 | INFO  | Handling group overwrites in 99-overwrite 2026-02-15 05:11:40.909394 | orchestrator | 2026-02-15 05:11:20 | INFO  | Removing group frr:children from 60-generic 2026-02-15 05:11:40.909405 | orchestrator | 2026-02-15 05:11:20 | INFO  | Removing group netbird:children from 50-infrastructure 2026-02-15 05:11:40.909415 | orchestrator | 2026-02-15 05:11:20 | INFO  | Removing group ceph-mds from 50-ceph 2026-02-15 05:11:40.909490 | orchestrator | 2026-02-15 05:11:20 | INFO  | Removing group ceph-rgw from 50-ceph 2026-02-15 05:11:40.909503 | orchestrator | 2026-02-15 05:11:20 | INFO  | Handling group overwrites in 20-roles 2026-02-15 05:11:40.909514 | orchestrator | 2026-02-15 05:11:20 | INFO  | Removing group k3s_node 
from 50-infrastructure 2026-02-15 05:11:40.909532 | orchestrator | 2026-02-15 05:11:20 | INFO  | Removed 5 group(s) in total 2026-02-15 05:11:40.909559 | orchestrator | 2026-02-15 05:11:20 | INFO  | Inventory overwrite handling completed 2026-02-15 05:11:40.909581 | orchestrator | 2026-02-15 05:11:22 | INFO  | Starting merge of inventory files 2026-02-15 05:11:40.909599 | orchestrator | 2026-02-15 05:11:22 | INFO  | Inventory files merged successfully 2026-02-15 05:11:40.909647 | orchestrator | 2026-02-15 05:11:27 | INFO  | Generating ClusterShell configuration from Ansible inventory 2026-02-15 05:11:40.909668 | orchestrator | 2026-02-15 05:11:39 | INFO  | Successfully wrote ClusterShell configuration 2026-02-15 05:11:41.227574 | orchestrator | + [[ '' == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-02-15 05:11:41.227678 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-02-15 05:11:41.227694 | orchestrator | + local max_attempts=60 2026-02-15 05:11:41.227708 | orchestrator | + local name=kolla-ansible 2026-02-15 05:11:41.227719 | orchestrator | + local attempt_num=1 2026-02-15 05:11:41.227729 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-02-15 05:11:41.260292 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-15 05:11:41.260384 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-02-15 05:11:41.260399 | orchestrator | + local max_attempts=60 2026-02-15 05:11:41.260410 | orchestrator | + local name=osism-ansible 2026-02-15 05:11:41.260421 | orchestrator | + local attempt_num=1 2026-02-15 05:11:41.260767 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-02-15 05:11:41.293177 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-02-15 05:11:41.293259 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-02-15 05:11:41.490848 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-02-15 05:11:41.490937 | 
orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20251208.0 "/entrypoint.sh osis…" ceph-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-15 05:11:41.490951 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20251208.0 "/entrypoint.sh osis…" kolla-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-15 05:11:41.490961 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" api 3 minutes ago Up 3 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-02-15 05:11:41.490976 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 hours ago Up 2 minutes (healthy) 8000/tcp 2026-02-15 05:11:41.490986 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" beat 3 minutes ago Up 3 minutes (healthy) 2026-02-15 05:11:41.490996 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" flower 3 minutes ago Up 3 minutes (healthy) 2026-02-15 05:11:41.491005 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20251208.0 "/sbin/tini -- /entr…" inventory_reconciler 3 minutes ago Up 2 minutes (healthy) 2026-02-15 05:11:41.491015 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" listener 3 minutes ago Restarting (0) 14 seconds ago 2026-02-15 05:11:41.491024 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 hours ago Up 3 minutes (healthy) 3306/tcp 2026-02-15 05:11:41.491034 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- osism…" openstack 3 minutes ago Up 3 minutes (healthy) 2026-02-15 05:11:41.491043 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 hours ago Up 3 
minutes (healthy) 6379/tcp 2026-02-15 05:11:41.491052 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20251208.0 "/entrypoint.sh osis…" osism-ansible 3 minutes ago Up 2 minutes (healthy) 2026-02-15 05:11:41.491088 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:0.20251208.0 "docker-entrypoint.s…" frontend 3 minutes ago Up 3 minutes 192.168.16.5:3000->3000/tcp 2026-02-15 05:11:41.491098 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20251208.0 "/entrypoint.sh osis…" osism-kubernetes 3 minutes ago Up 2 minutes (healthy) 2026-02-15 05:11:41.491108 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20251208.0 "/sbin/tini -- sleep…" osismclient 3 minutes ago Up 3 minutes (healthy) 2026-02-15 05:11:41.499415 | orchestrator | + [[ '' == \t\r\u\e ]] 2026-02-15 05:11:41.499511 | orchestrator | + [[ '' == \f\a\l\s\e ]] 2026-02-15 05:11:41.499524 | orchestrator | + osism apply facts 2026-02-15 05:11:53.697319 | orchestrator | 2026-02-15 05:11:53 | INFO  | Task 33181c42-0ab6-4d0b-84ac-2382fbae8d8c (facts) was prepared for execution. 2026-02-15 05:11:53.697523 | orchestrator | 2026-02-15 05:11:53 | INFO  | It takes a moment until task 33181c42-0ab6-4d0b-84ac-2382fbae8d8c (facts) has been started and output is visible here. 
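The `wait_for_container_healthy` calls traced above poll Docker's health-check state until the named container reports `healthy`. A reconstruction from the `set -x` trace (the real helper ships with the testbed repository; the argument names follow the trace, while the 5-second sleep between probes is an assumption):

```shell
# Docker reports the health-check state as healthy/unhealthy/starting.
container_health_status() {
    /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$1"
}

# Poll up to max_attempts times until the container reports healthy,
# as in the traced calls `wait_for_container_healthy 60 kolla-ansible`.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [ "$(container_health_status "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container ${name} never became healthy" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5
    done
}
```

In the trace above both `kolla-ansible` and `osism-ansible` report `healthy` on the first probe, so each loop exits immediately.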
2026-02-15 05:12:13.348049 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-15 05:12:13.348177 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-15 05:12:13.348204 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-15 05:12:13.348214 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-15 05:12:13.348233 | orchestrator | 2026-02-15 05:12:13.348244 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-02-15 05:12:13.348253 | orchestrator | 2026-02-15 05:12:13.348263 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-02-15 05:12:13.348273 | orchestrator | Sunday 15 February 2026 05:12:00 +0000 (0:00:02.082) 0:00:02.082 ******* 2026-02-15 05:12:13.348282 | orchestrator | ok: [testbed-manager] 2026-02-15 05:12:13.348293 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:12:13.348303 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:12:13.348312 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:12:13.348321 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:12:13.348331 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:12:13.348340 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:12:13.348350 | orchestrator | 2026-02-15 05:12:13.348359 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-02-15 05:12:13.348369 | orchestrator | Sunday 15 February 2026 05:12:02 +0000 (0:00:02.173) 0:00:04.256 ******* 2026-02-15 05:12:13.348435 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:12:13.348446 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:12:13.348474 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:12:13.348485 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:12:13.348499 | orchestrator | skipping: [testbed-node-3] 2026-02-15 
05:12:13.348509 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:12:13.348518 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:12:13.348528 | orchestrator | 2026-02-15 05:12:13.348538 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-02-15 05:12:13.348548 | orchestrator | 2026-02-15 05:12:13.348557 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-02-15 05:12:13.348567 | orchestrator | Sunday 15 February 2026 05:12:04 +0000 (0:00:01.799) 0:00:06.055 ******* 2026-02-15 05:12:13.348577 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:12:13.348587 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:12:13.348598 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:12:13.348610 | orchestrator | ok: [testbed-manager] 2026-02-15 05:12:13.348645 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:12:13.348657 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:12:13.348667 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:12:13.348679 | orchestrator | 2026-02-15 05:12:13.348689 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-02-15 05:12:13.348700 | orchestrator | 2026-02-15 05:12:13.348711 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-02-15 05:12:13.348722 | orchestrator | Sunday 15 February 2026 05:12:11 +0000 (0:00:06.914) 0:00:12.970 ******* 2026-02-15 05:12:13.348732 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:12:13.348744 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:12:13.348755 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:12:13.348766 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:12:13.348777 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:12:13.348787 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:12:13.348797 | orchestrator | skipping: [testbed-node-5] 
2026-02-15 05:12:13.348807 | orchestrator | 2026-02-15 05:12:13.348819 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:12:13.348830 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 05:12:13.348842 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 05:12:13.348853 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 05:12:13.348863 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 05:12:13.348874 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 05:12:13.348885 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 05:12:13.348897 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 05:12:13.348908 | orchestrator | 2026-02-15 05:12:13.348919 | orchestrator | 2026-02-15 05:12:13.348931 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:12:13.348942 | orchestrator | Sunday 15 February 2026 05:12:12 +0000 (0:00:01.692) 0:00:14.662 ******* 2026-02-15 05:12:13.348953 | orchestrator | =============================================================================== 2026-02-15 05:12:13.348962 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.91s 2026-02-15 05:12:13.348971 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 2.17s 2026-02-15 05:12:13.348980 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.80s 2026-02-15 05:12:13.348990 | orchestrator | Gather facts for all hosts 
---------------------------------------------- 1.69s 2026-02-15 05:12:13.659167 | orchestrator | ++ semver 10.0.0-rc.1 10.0.0-0 2026-02-15 05:12:13.768126 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-15 05:12:13.768854 | orchestrator | ++ docker inspect --format '{{ index .Config.Labels "de.osism.release.openstack"}}' kolla-ansible 2026-02-15 05:12:13.814827 | orchestrator | + OPENSTACK_VERSION=2025.1 2026-02-15 05:12:13.814939 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla/release/2025.1 2026-02-15 05:12:13.822185 | orchestrator | + set -e 2026-02-15 05:12:13.822231 | orchestrator | + NAMESPACE=kolla/release/2025.1 2026-02-15 05:12:13.822246 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla/release/2025.1#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-02-15 05:12:13.832624 | orchestrator | + sh -c /opt/configuration/scripts/upgrade-services.sh 2026-02-15 05:12:13.841971 | orchestrator | 2026-02-15 05:12:13.842082 | orchestrator | # UPGRADE SERVICES 2026-02-15 05:12:13.842137 | orchestrator | 2026-02-15 05:12:13.842158 | orchestrator | + set -e 2026-02-15 05:12:13.842179 | orchestrator | + echo 2026-02-15 05:12:13.842196 | orchestrator | + echo '# UPGRADE SERVICES' 2026-02-15 05:12:13.842216 | orchestrator | + echo 2026-02-15 05:12:13.842234 | orchestrator | + source /opt/manager-vars.sh 2026-02-15 05:12:13.843158 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-15 05:12:13.843192 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-15 05:12:13.843212 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-15 05:12:13.843231 | orchestrator | ++ CEPH_VERSION=reef 2026-02-15 05:12:13.843243 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-15 05:12:13.843256 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-15 05:12:13.843267 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-15 05:12:13.843278 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-15 05:12:13.843289 | orchestrator | ++ export 
OPENSTACK_VERSION=2024.2 2026-02-15 05:12:13.843299 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-15 05:12:13.843310 | orchestrator | ++ export ARA=false 2026-02-15 05:12:13.843320 | orchestrator | ++ ARA=false 2026-02-15 05:12:13.843332 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-15 05:12:13.843342 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-15 05:12:13.843353 | orchestrator | ++ export TEMPEST=false 2026-02-15 05:12:13.843363 | orchestrator | ++ TEMPEST=false 2026-02-15 05:12:13.843412 | orchestrator | ++ export IS_ZUUL=true 2026-02-15 05:12:13.843426 | orchestrator | ++ IS_ZUUL=true 2026-02-15 05:12:13.843437 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145 2026-02-15 05:12:13.843448 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145 2026-02-15 05:12:13.843459 | orchestrator | ++ export EXTERNAL_API=false 2026-02-15 05:12:13.843469 | orchestrator | ++ EXTERNAL_API=false 2026-02-15 05:12:13.843480 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-15 05:12:13.843490 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-15 05:12:13.843501 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-02-15 05:12:13.843511 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-15 05:12:13.843522 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-15 05:12:13.843533 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-15 05:12:13.843544 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-15 05:12:13.843554 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-15 05:12:13.843585 | orchestrator | + SKIP_OPENSTACK_UPGRADE=false 2026-02-15 05:12:13.843596 | orchestrator | + SKIP_CEPH_UPGRADE=false 2026-02-15 05:12:13.843607 | orchestrator | + sh -c /opt/configuration/scripts/pull-images.sh 2026-02-15 05:12:13.852951 | orchestrator | + set -e 2026-02-15 05:12:13.853049 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-15 05:12:13.853922 | orchestrator | ++ export INTERACTIVE=false 2026-02-15 05:12:13.853969 | 
orchestrator | ++ INTERACTIVE=false 2026-02-15 05:12:13.853980 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-15 05:12:13.854004 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-15 05:12:13.854059 | orchestrator | + source /opt/manager-vars.sh 2026-02-15 05:12:13.854428 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-02-15 05:12:13.854463 | orchestrator | ++ NUMBER_OF_NODES=6 2026-02-15 05:12:13.854475 | orchestrator | ++ export CEPH_VERSION=reef 2026-02-15 05:12:13.854485 | orchestrator | ++ CEPH_VERSION=reef 2026-02-15 05:12:13.854496 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-02-15 05:12:13.854508 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-02-15 05:12:13.854518 | orchestrator | ++ export MANAGER_VERSION=9.5.0 2026-02-15 05:12:13.854529 | orchestrator | ++ MANAGER_VERSION=9.5.0 2026-02-15 05:12:13.854539 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-02-15 05:12:13.854550 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-02-15 05:12:13.854561 | orchestrator | ++ export ARA=false 2026-02-15 05:12:13.854571 | orchestrator | ++ ARA=false 2026-02-15 05:12:13.854582 | orchestrator | ++ export DEPLOY_MODE=manager 2026-02-15 05:12:13.854592 | orchestrator | ++ DEPLOY_MODE=manager 2026-02-15 05:12:13.854603 | orchestrator | ++ export TEMPEST=false 2026-02-15 05:12:13.854613 | orchestrator | ++ TEMPEST=false 2026-02-15 05:12:13.854624 | orchestrator | ++ export IS_ZUUL=true 2026-02-15 05:12:13.854634 | orchestrator | ++ IS_ZUUL=true 2026-02-15 05:12:13.854646 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145 2026-02-15 05:12:13.854657 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145 2026-02-15 05:12:13.854668 | orchestrator | ++ export EXTERNAL_API=false 2026-02-15 05:12:13.854678 | orchestrator | ++ EXTERNAL_API=false 2026-02-15 05:12:13.854689 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-02-15 05:12:13.854699 | orchestrator | ++ IMAGE_USER=ubuntu 2026-02-15 05:12:13.854709 | orchestrator | ++ 
export IMAGE_NODE_USER=ubuntu 2026-02-15 05:12:13.854720 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-02-15 05:12:13.854730 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-02-15 05:12:13.854741 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-02-15 05:12:13.854772 | orchestrator | ++ export RABBITMQ3TO4=true 2026-02-15 05:12:13.854783 | orchestrator | 2026-02-15 05:12:13.854794 | orchestrator | ++ RABBITMQ3TO4=true 2026-02-15 05:12:13.854804 | orchestrator | + echo 2026-02-15 05:12:13.854906 | orchestrator | # PULL IMAGES 2026-02-15 05:12:13.854923 | orchestrator | 2026-02-15 05:12:13.854934 | orchestrator | + echo '# PULL IMAGES' 2026-02-15 05:12:13.854945 | orchestrator | + echo 2026-02-15 05:12:13.856108 | orchestrator | ++ semver 9.5.0 7.0.0 2026-02-15 05:12:13.916166 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-15 05:12:13.916259 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-02-15 05:12:15.955314 | orchestrator | 2026-02-15 05:12:15 | INFO  | Trying to run play pull-images in environment custom 2026-02-15 05:12:26.039034 | orchestrator | 2026-02-15 05:12:26 | INFO  | Task a37a333b-dbc8-439e-bb67-16b5af015c2e (pull-images) was prepared for execution. 2026-02-15 05:12:26.039154 | orchestrator | 2026-02-15 05:12:26 | INFO  | Task a37a333b-dbc8-439e-bb67-16b5af015c2e is running in background. No more output. Check ARA for logs. 
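The `set-kolla-namespace.sh` step traced above boils down to a `sed` in-place rewrite of the `docker_namespace` key. A minimal, self-contained sketch of that pattern (hypothetical file contents; the real script operates on `/opt/configuration/inventory/group_vars/all/kolla.yml`):

```shell
# Sketch of the docker_namespace rewrite seen in the trace.
# The temp file stands in for the real kolla.yml (assumption).
set -e
NAMESPACE="kolla/release/2025.1"
kolla_yml=$(mktemp)
echo 'docker_namespace: kolla/release/2024.2' > "$kolla_yml"
# '#' as the sed delimiter avoids escaping the '/' in the namespace
sed -i "s#docker_namespace: .*#docker_namespace: ${NAMESPACE}#g" "$kolla_yml"
result=$(cat "$kolla_yml")
echo "$result"
rm -f "$kolla_yml"
```

Using `#` as the substitution delimiter is what lets the namespace value contain slashes without escaping, exactly as in the logged command.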
2026-02-15 05:12:26.361092 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/500-kubernetes.sh 2026-02-15 05:12:26.372528 | orchestrator | + set -e 2026-02-15 05:12:26.372671 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-02-15 05:12:26.372701 | orchestrator | ++ export INTERACTIVE=false 2026-02-15 05:12:26.372724 | orchestrator | ++ INTERACTIVE=false 2026-02-15 05:12:26.372820 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-02-15 05:12:26.372836 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-02-15 05:12:26.372848 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2026-02-15 05:12:26.375135 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2026-02-15 05:12:26.387252 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1 2026-02-15 05:12:26.387327 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1 2026-02-15 05:12:26.388445 | orchestrator | ++ semver 10.0.0-rc.1 8.0.3 2026-02-15 05:12:26.442270 | orchestrator | + [[ 1 -ge 0 ]] 2026-02-15 05:12:26.442416 | orchestrator | + osism apply frr 2026-02-15 05:12:38.769672 | orchestrator | 2026-02-15 05:12:38 | INFO  | Task aa87362a-4a93-4142-8abb-8e42b48d1986 (frr) was prepared for execution. 2026-02-15 05:12:38.769829 | orchestrator | 2026-02-15 05:12:38 | INFO  | It takes a moment until task aa87362a-4a93-4142-8abb-8e42b48d1986 (frr) has been started and output is visible here. 
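The recurring `semver X Y` followed by `[[ 1 -ge 0 ]]` in the trace is a version gate: a step runs only when the detected version is at least the required minimum (here, `MANAGER_VERSION=10.0.0-rc.1` against `8.0.3` before `osism apply frr`). The actual `semver` helper is not shown in the log; this sketch emulates its comparison with `sort -V` (an assumption; pre-release ordering can differ from strict SemVer):

```shell
# Emulated version comparison: prints 1 if $1 > $2, 0 if equal, -1 if $1 < $2.
semver_cmp() {
  if [ "$1" = "$2" ]; then echo 0; return; fi
  lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
  if [ "$lowest" = "$2" ]; then echo 1; else echo -1; fi
}

# Versions taken from the log; the gate mirrors '[[ 1 -ge 0 ]]'.
gate=$(semver_cmp 10.0.0-rc.1 8.0.3)
if [ "$gate" -ge 0 ]; then
  echo "gate passed: minimum version satisfied"
fi
```

`sort -V` requires GNU coreutils; on minimal images (busybox) the helper would need a different comparison.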
2026-02-15 05:13:12.233168 | orchestrator | 2026-02-15 05:13:12.233350 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-02-15 05:13:12.233370 | orchestrator | 2026-02-15 05:13:12.233382 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-02-15 05:13:12.233395 | orchestrator | Sunday 15 February 2026 05:12:47 +0000 (0:00:04.683) 0:00:04.683 ******* 2026-02-15 05:13:12.233406 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-02-15 05:13:12.233419 | orchestrator | 2026-02-15 05:13:12.233430 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-02-15 05:13:12.233441 | orchestrator | Sunday 15 February 2026 05:12:49 +0000 (0:00:01.954) 0:00:06.637 ******* 2026-02-15 05:13:12.233453 | orchestrator | ok: [testbed-manager] 2026-02-15 05:13:12.233465 | orchestrator | 2026-02-15 05:13:12.233476 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-02-15 05:13:12.233487 | orchestrator | Sunday 15 February 2026 05:12:52 +0000 (0:00:02.376) 0:00:09.014 ******* 2026-02-15 05:13:12.233498 | orchestrator | ok: [testbed-manager] 2026-02-15 05:13:12.233509 | orchestrator | 2026-02-15 05:13:12.233520 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-02-15 05:13:12.233531 | orchestrator | Sunday 15 February 2026 05:12:55 +0000 (0:00:02.928) 0:00:11.943 ******* 2026-02-15 05:13:12.233542 | orchestrator | ok: [testbed-manager] 2026-02-15 05:13:12.233553 | orchestrator | 2026-02-15 05:13:12.233564 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-02-15 05:13:12.233575 | orchestrator | Sunday 15 February 2026 05:12:57 +0000 (0:00:01.917) 0:00:13.860 ******* 2026-02-15 
05:13:12.233612 | orchestrator | ok: [testbed-manager] 2026-02-15 05:13:12.233623 | orchestrator | 2026-02-15 05:13:12.233634 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-02-15 05:13:12.233645 | orchestrator | Sunday 15 February 2026 05:12:59 +0000 (0:00:01.974) 0:00:15.834 ******* 2026-02-15 05:13:12.233655 | orchestrator | ok: [testbed-manager] 2026-02-15 05:13:12.233666 | orchestrator | 2026-02-15 05:13:12.233677 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-02-15 05:13:12.233688 | orchestrator | Sunday 15 February 2026 05:13:01 +0000 (0:00:02.429) 0:00:18.264 ******* 2026-02-15 05:13:12.233699 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:13:12.233711 | orchestrator | 2026-02-15 05:13:12.233722 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-02-15 05:13:12.233733 | orchestrator | Sunday 15 February 2026 05:13:02 +0000 (0:00:01.117) 0:00:19.382 ******* 2026-02-15 05:13:12.233743 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:13:12.233755 | orchestrator | 2026-02-15 05:13:12.233765 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-02-15 05:13:12.233776 | orchestrator | Sunday 15 February 2026 05:13:03 +0000 (0:00:01.141) 0:00:20.523 ******* 2026-02-15 05:13:12.233787 | orchestrator | ok: [testbed-manager] 2026-02-15 05:13:12.233797 | orchestrator | 2026-02-15 05:13:12.233808 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-02-15 05:13:12.233819 | orchestrator | Sunday 15 February 2026 05:13:05 +0000 (0:00:01.963) 0:00:22.486 ******* 2026-02-15 05:13:12.233830 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-02-15 05:13:12.233859 | orchestrator | ok: [testbed-manager] => (item={'name': 
'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-02-15 05:13:12.233871 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-02-15 05:13:12.233882 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-02-15 05:13:12.233893 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-02-15 05:13:12.233904 | orchestrator | ok: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-02-15 05:13:12.233915 | orchestrator | 2026-02-15 05:13:12.233926 | orchestrator | TASK [osism.services.frr : Manage frr service] ********************************* 2026-02-15 05:13:12.233937 | orchestrator | Sunday 15 February 2026 05:13:09 +0000 (0:00:03.684) 0:00:26.171 ******* 2026-02-15 05:13:12.233947 | orchestrator | ok: [testbed-manager] 2026-02-15 05:13:12.233958 | orchestrator | 2026-02-15 05:13:12.233969 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:13:12.233980 | orchestrator | testbed-manager : ok=9  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-02-15 05:13:12.233991 | orchestrator | 2026-02-15 05:13:12.234001 | orchestrator | 2026-02-15 05:13:12.234012 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:13:12.234081 | orchestrator | Sunday 15 February 2026 05:13:11 +0000 (0:00:02.457) 0:00:28.628 ******* 2026-02-15 05:13:12.234093 | orchestrator | =============================================================================== 2026-02-15 05:13:12.234104 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 3.68s 2026-02-15 05:13:12.234115 | orchestrator | osism.services.frr : Install frr package -------------------------------- 2.93s 2026-02-15 05:13:12.234126 | orchestrator | 
osism.services.frr : Manage frr service --------------------------------- 2.46s 2026-02-15 05:13:12.234137 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 2.43s 2026-02-15 05:13:12.234147 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 2.38s 2026-02-15 05:13:12.234158 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.97s 2026-02-15 05:13:12.234169 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.96s 2026-02-15 05:13:12.234189 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 1.96s 2026-02-15 05:13:12.234219 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.92s 2026-02-15 05:13:12.235022 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 1.14s 2026-02-15 05:13:12.235047 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 1.12s 2026-02-15 05:13:12.534833 | orchestrator | + osism apply kubernetes 2026-02-15 05:13:14.828719 | orchestrator | 2026-02-15 05:13:14 | INFO  | Task f6d7f8eb-b5f7-4b83-bcaf-da99ddb564cf (kubernetes) was prepared for execution. 2026-02-15 05:13:14.828820 | orchestrator | 2026-02-15 05:13:14 | INFO  | It takes a moment until task f6d7f8eb-b5f7-4b83-bcaf-da99ddb564cf (kubernetes) has been started and output is visible here. 
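The "Set sysctl parameters" task in the frr play above applies a handful of kernel settings (values read directly from the loop items in the log). Expressed as a sysctl.d fragment they would look like this (filename is illustrative; the role may write them differently):

```
# /etc/sysctl.d/90-frr.conf -- settings as applied by osism.services.frr per this log
net.ipv4.ip_forward = 1
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.fib_multipath_hash_policy = 1
net.ipv4.conf.default.ignore_routes_with_linkdown = 1
net.ipv4.conf.all.rp_filter = 2
```

Forwarding plus multipath hashing and loose reverse-path filtering (`rp_filter = 2`) is the usual combination for a host that participates in BGP/ECMP routing, which matches the FRR role being applied here.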
2026-02-15 05:13:59.064947 | orchestrator | 2026-02-15 05:13:59.065050 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-02-15 05:13:59.065065 | orchestrator | 2026-02-15 05:13:59.065076 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-02-15 05:13:59.065086 | orchestrator | Sunday 15 February 2026 05:13:21 +0000 (0:00:01.907) 0:00:01.907 ******* 2026-02-15 05:13:59.065095 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:13:59.065105 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:13:59.065114 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:13:59.065122 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:13:59.065131 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:13:59.065139 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:13:59.065148 | orchestrator | 2026-02-15 05:13:59.065157 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-02-15 05:13:59.065165 | orchestrator | Sunday 15 February 2026 05:13:25 +0000 (0:00:04.491) 0:00:06.399 ******* 2026-02-15 05:13:59.065174 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:13:59.065184 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:13:59.065192 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:13:59.065201 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:13:59.065209 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:13:59.065218 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:13:59.065226 | orchestrator | 2026-02-15 05:13:59.065280 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-02-15 05:13:59.065290 | orchestrator | Sunday 15 February 2026 05:13:27 +0000 (0:00:01.910) 0:00:08.310 ******* 2026-02-15 05:13:59.065299 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:13:59.065309 | orchestrator | skipping: [testbed-node-4] 2026-02-15 
05:13:59.065318 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:13:59.065326 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:13:59.065335 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:13:59.065343 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:13:59.065352 | orchestrator | 2026-02-15 05:13:59.065361 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-02-15 05:13:59.065370 | orchestrator | Sunday 15 February 2026 05:13:29 +0000 (0:00:01.960) 0:00:10.271 ******* 2026-02-15 05:13:59.065378 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:13:59.065387 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:13:59.065396 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:13:59.065404 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:13:59.065413 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:13:59.065421 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:13:59.065430 | orchestrator | 2026-02-15 05:13:59.065438 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-02-15 05:13:59.065447 | orchestrator | Sunday 15 February 2026 05:13:32 +0000 (0:00:02.743) 0:00:13.014 ******* 2026-02-15 05:13:59.065456 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:13:59.065464 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:13:59.065473 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:13:59.065482 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:13:59.065512 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:13:59.065522 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:13:59.065532 | orchestrator | 2026-02-15 05:13:59.065542 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-02-15 05:13:59.065552 | orchestrator | Sunday 15 February 2026 05:13:35 +0000 (0:00:02.973) 0:00:15.988 ******* 2026-02-15 05:13:59.065561 | orchestrator | ok: [testbed-node-3] 2026-02-15 
05:13:59.065571 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:13:59.065581 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:13:59.065591 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:13:59.065601 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:13:59.065612 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:13:59.065622 | orchestrator | 2026-02-15 05:13:59.065631 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-02-15 05:13:59.065641 | orchestrator | Sunday 15 February 2026 05:13:37 +0000 (0:00:02.242) 0:00:18.231 ******* 2026-02-15 05:13:59.065651 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:13:59.065661 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:13:59.065671 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:13:59.065681 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:13:59.065691 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:13:59.065701 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:13:59.065711 | orchestrator | 2026-02-15 05:13:59.065721 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-02-15 05:13:59.065731 | orchestrator | Sunday 15 February 2026 05:13:39 +0000 (0:00:02.043) 0:00:20.275 ******* 2026-02-15 05:13:59.065741 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:13:59.065752 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:13:59.065762 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:13:59.065772 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:13:59.065790 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:13:59.065800 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:13:59.065811 | orchestrator | 2026-02-15 05:13:59.065821 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-02-15 05:13:59.065831 | orchestrator | Sunday 15 February 2026 05:13:41 +0000 
(0:00:01.843) 0:00:22.118 ******* 2026-02-15 05:13:59.065842 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-15 05:13:59.065850 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-15 05:13:59.065859 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:13:59.065868 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-15 05:13:59.065876 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-15 05:13:59.065885 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:13:59.065893 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-15 05:13:59.065902 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-15 05:13:59.065910 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:13:59.065919 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-15 05:13:59.065942 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-15 05:13:59.065961 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:13:59.065985 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-15 05:13:59.065995 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-15 05:13:59.066003 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:13:59.066062 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-02-15 05:13:59.066074 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-02-15 05:13:59.066083 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:13:59.066092 | orchestrator | 2026-02-15 05:13:59.066108 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to 
sudo secure_path] ********************* 2026-02-15 05:13:59.066117 | orchestrator | Sunday 15 February 2026 05:13:43 +0000 (0:00:01.853) 0:00:23.972 ******* 2026-02-15 05:13:59.066125 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:13:59.066143 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:13:59.066152 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:13:59.066161 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:13:59.066169 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:13:59.066178 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:13:59.066186 | orchestrator | 2026-02-15 05:13:59.066195 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-02-15 05:13:59.066205 | orchestrator | Sunday 15 February 2026 05:13:45 +0000 (0:00:02.314) 0:00:26.286 ******* 2026-02-15 05:13:59.066213 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:13:59.066222 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:13:59.066269 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:13:59.066278 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:13:59.066287 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:13:59.066295 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:13:59.066304 | orchestrator | 2026-02-15 05:13:59.066312 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-02-15 05:13:59.066321 | orchestrator | Sunday 15 February 2026 05:13:47 +0000 (0:00:01.944) 0:00:28.230 ******* 2026-02-15 05:13:59.066329 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:13:59.066338 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:13:59.066346 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:13:59.066355 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:13:59.066363 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:13:59.066371 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:13:59.066380 | 
orchestrator | 2026-02-15 05:13:59.066388 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-02-15 05:13:59.066397 | orchestrator | Sunday 15 February 2026 05:13:50 +0000 (0:00:02.689) 0:00:30.920 ******* 2026-02-15 05:13:59.066405 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:13:59.066414 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:13:59.066422 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:13:59.066431 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:13:59.066439 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:13:59.066448 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:13:59.066456 | orchestrator | 2026-02-15 05:13:59.066465 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-02-15 05:13:59.066474 | orchestrator | Sunday 15 February 2026 05:13:52 +0000 (0:00:02.042) 0:00:32.963 ******* 2026-02-15 05:13:59.066482 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:13:59.066491 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:13:59.066499 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:13:59.066507 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:13:59.066516 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:13:59.066524 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:13:59.066533 | orchestrator | 2026-02-15 05:13:59.066541 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-02-15 05:13:59.066552 | orchestrator | Sunday 15 February 2026 05:13:54 +0000 (0:00:02.328) 0:00:35.291 ******* 2026-02-15 05:13:59.066560 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:13:59.066572 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:13:59.066581 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:13:59.066590 | orchestrator | skipping: 
[testbed-node-0] 2026-02-15 05:13:59.066598 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:13:59.066607 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:13:59.066615 | orchestrator | 2026-02-15 05:13:59.066624 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-02-15 05:13:59.066632 | orchestrator | Sunday 15 February 2026 05:13:56 +0000 (0:00:01.816) 0:00:37.107 ******* 2026-02-15 05:13:59.066647 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-02-15 05:13:59.066656 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-02-15 05:13:59.066664 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:13:59.066673 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-02-15 05:13:59.066681 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-02-15 05:13:59.066690 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:13:59.066698 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-02-15 05:13:59.066706 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-02-15 05:13:59.066715 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:13:59.066723 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-02-15 05:13:59.066732 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-02-15 05:13:59.066740 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:13:59.066749 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-02-15 05:13:59.066757 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-02-15 05:13:59.066766 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:13:59.066774 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-02-15 05:13:59.066783 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-02-15 05:13:59.066791 | orchestrator | skipping: [testbed-node-2] 2026-02-15 
05:13:59.066799 | orchestrator | 2026-02-15 05:13:59.066808 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-02-15 05:13:59.066817 | orchestrator | Sunday 15 February 2026 05:13:58 +0000 (0:00:01.957) 0:00:39.065 ******* 2026-02-15 05:13:59.066825 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:13:59.066834 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:13:59.066849 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:15:37.993939 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:15:37.994095 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:15:37.994130 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:15:37.994141 | orchestrator | 2026-02-15 05:15:37.994151 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-02-15 05:15:37.994160 | orchestrator | Sunday 15 February 2026 05:14:00 +0000 (0:00:01.840) 0:00:40.906 ******* 2026-02-15 05:15:37.994169 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:15:37.994177 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:15:37.994184 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:15:37.994192 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:15:37.994200 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:15:37.994207 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:15:37.994214 | orchestrator | 2026-02-15 05:15:37.994220 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-02-15 05:15:37.994226 | orchestrator | 2026-02-15 05:15:37.994233 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-02-15 05:15:37.994241 | orchestrator | Sunday 15 February 2026 05:14:03 +0000 (0:00:02.785) 0:00:43.691 ******* 2026-02-15 05:15:37.994247 | orchestrator | ok: [testbed-node-0] 2026-02-15 
05:15:37.994256 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.994281 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.994289 | orchestrator |
2026-02-15 05:15:37.994299 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2026-02-15 05:15:37.994306 | orchestrator | Sunday 15 February 2026 05:14:04 +0000 (0:00:01.782) 0:00:45.474 *******
2026-02-15 05:15:37.994313 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.994321 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.994328 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.994335 | orchestrator |
2026-02-15 05:15:37.994342 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2026-02-15 05:15:37.994349 | orchestrator | Sunday 15 February 2026 05:14:07 +0000 (0:00:02.130) 0:00:47.604 *******
2026-02-15 05:15:37.994375 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:15:37.994382 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:15:37.994390 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:15:37.994397 | orchestrator |
2026-02-15 05:15:37.994404 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2026-02-15 05:15:37.994412 | orchestrator | Sunday 15 February 2026 05:14:09 +0000 (0:00:01.949) 0:00:50.224 *******
2026-02-15 05:15:37.994419 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.994425 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.994432 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.994439 | orchestrator |
2026-02-15 05:15:37.994446 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2026-02-15 05:15:37.994452 | orchestrator | Sunday 15 February 2026 05:14:11 +0000 (0:00:01.398) 0:00:52.174 *******
2026-02-15 05:15:37.994459 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:15:37.994466 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:15:37.994473 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:15:37.994479 | orchestrator |
2026-02-15 05:15:37.994486 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2026-02-15 05:15:37.994493 | orchestrator | Sunday 15 February 2026 05:14:13 +0000 (0:00:01.398) 0:00:53.572 *******
2026-02-15 05:15:37.994499 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.994508 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.994516 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.994524 | orchestrator |
2026-02-15 05:15:37.994531 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2026-02-15 05:15:37.994539 | orchestrator | Sunday 15 February 2026 05:14:14 +0000 (0:00:01.693) 0:00:55.266 *******
2026-02-15 05:15:37.994546 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.994552 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.994558 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.994565 | orchestrator |
2026-02-15 05:15:37.994571 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2026-02-15 05:15:37.994578 | orchestrator | Sunday 15 February 2026 05:14:16 +0000 (0:00:02.224) 0:00:57.491 *******
2026-02-15 05:15:37.994585 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 05:15:37.994592 | orchestrator |
2026-02-15 05:15:37.994598 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2026-02-15 05:15:37.994605 | orchestrator | Sunday 15 February 2026 05:14:18 +0000 (0:00:01.968) 0:00:59.460 *******
2026-02-15 05:15:37.994613 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.994620 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.994627 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.994633 | orchestrator |
2026-02-15 05:15:37.994639 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2026-02-15 05:15:37.994646 | orchestrator | Sunday 15 February 2026 05:14:21 +0000 (0:00:02.480) 0:01:01.940 *******
2026-02-15 05:15:37.994654 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:15:37.994662 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.994670 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:15:37.994677 | orchestrator |
2026-02-15 05:15:37.994684 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2026-02-15 05:15:37.994691 | orchestrator | Sunday 15 February 2026 05:14:23 +0000 (0:00:01.707) 0:01:03.648 *******
2026-02-15 05:15:37.994699 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:15:37.994706 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:15:37.994715 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:15:37.994723 | orchestrator |
2026-02-15 05:15:37.994731 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2026-02-15 05:15:37.994739 | orchestrator | Sunday 15 February 2026 05:14:25 +0000 (0:00:01.893) 0:01:05.541 *******
2026-02-15 05:15:37.994748 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:15:37.994756 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:15:37.994764 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:15:37.994782 | orchestrator |
2026-02-15 05:15:37.994790 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2026-02-15 05:15:37.994798 | orchestrator | Sunday 15 February 2026 05:14:27 +0000 (0:00:02.442) 0:01:07.984 *******
2026-02-15 05:15:37.994805 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:15:37.994813 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:15:37.994837 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:15:37.994845 | orchestrator |
2026-02-15 05:15:37.994853 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2026-02-15 05:15:37.994861 | orchestrator | Sunday 15 February 2026 05:14:28 +0000 (0:00:01.417) 0:01:09.401 *******
2026-02-15 05:15:37.994869 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:15:37.994877 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:15:37.994885 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:15:37.994891 | orchestrator |
2026-02-15 05:15:37.994898 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2026-02-15 05:15:37.994904 | orchestrator | Sunday 15 February 2026 05:14:30 +0000 (0:00:01.642) 0:01:11.044 *******
2026-02-15 05:15:37.994911 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:15:37.994918 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:15:37.994924 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:15:37.994931 | orchestrator |
2026-02-15 05:15:37.994939 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
2026-02-15 05:15:37.994947 | orchestrator | Sunday 15 February 2026 05:14:32 +0000 (0:00:02.191) 0:01:13.236 *******
2026-02-15 05:15:37.994955 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.994962 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.994969 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.994977 | orchestrator |
2026-02-15 05:15:37.994984 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
2026-02-15 05:15:37.994992 | orchestrator | Sunday 15 February 2026 05:14:34 +0000 (0:00:01.901) 0:01:15.137 *******
2026-02-15 05:15:37.995000 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.995008 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.995015 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.995021 | orchestrator |
2026-02-15 05:15:37.995029 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2026-02-15 05:15:37.995035 | orchestrator | Sunday 15 February 2026 05:14:36 +0000 (0:00:01.436) 0:01:16.574 *******
2026-02-15 05:15:37.995041 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-15 05:15:37.995050 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-15 05:15:37.995057 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2026-02-15 05:15:37.995063 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-15 05:15:37.995070 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-15 05:15:37.995077 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2026-02-15 05:15:37.995085 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.995091 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.995098 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.995106 | orchestrator |
2026-02-15 05:15:37.995166 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2026-02-15 05:15:37.995175 | orchestrator | Sunday 15 February 2026 05:14:59 +0000 (0:00:23.386) 0:01:39.961 *******
2026-02-15 05:15:37.995183 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:15:37.995190 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:15:37.995209 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:15:37.995216 | orchestrator |
2026-02-15 05:15:37.995222 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2026-02-15 05:15:37.995228 | orchestrator | Sunday 15 February 2026 05:15:00 +0000 (0:00:01.409) 0:01:41.371 *******
2026-02-15 05:15:37.995234 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:15:37.995240 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:15:37.995246 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:15:37.995252 | orchestrator |
2026-02-15 05:15:37.995258 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2026-02-15 05:15:37.995265 | orchestrator | Sunday 15 February 2026 05:15:03 +0000 (0:00:02.173) 0:01:43.544 *******
2026-02-15 05:15:37.995272 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.995279 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.995286 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.995293 | orchestrator |
2026-02-15 05:15:37.995300 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2026-02-15 05:15:37.995307 | orchestrator | Sunday 15 February 2026 05:15:05 +0000 (0:00:02.324) 0:01:45.868 *******
2026-02-15 05:15:37.995315 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:15:37.995321 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:15:37.995328 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:15:37.995335 | orchestrator |
2026-02-15 05:15:37.995341 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2026-02-15 05:15:37.995348 | orchestrator | Sunday 15 February 2026 05:15:32 +0000 (0:00:27.168) 0:02:13.037 *******
2026-02-15 05:15:37.995354 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.995361 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.995368 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.995375 | orchestrator |
2026-02-15 05:15:37.995390 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2026-02-15 05:15:37.995397 | orchestrator | Sunday 15 February 2026 05:15:34 +0000 (0:00:01.731) 0:02:14.768 *******
2026-02-15 05:15:37.995404 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:15:37.995411 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:15:37.995418 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:15:37.995425 | orchestrator |
2026-02-15 05:15:37.995432 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2026-02-15 05:15:37.995439 | orchestrator | Sunday 15 February 2026 05:15:35 +0000 (0:00:01.686) 0:02:16.454 *******
2026-02-15 05:15:37.995446 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:15:37.995453 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:15:37.995460 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:15:37.995467 | orchestrator |
2026-02-15 05:15:37.995484 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2026-02-15 05:16:27.688843 | orchestrator | Sunday 15 February 2026 05:15:37 +0000 (0:00:02.022) 0:02:18.477 *******
2026-02-15 05:16:27.688964 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:16:27.688982 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:16:27.688994 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:16:27.689005 | orchestrator |
2026-02-15 05:16:27.689017 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2026-02-15 05:16:27.689029 | orchestrator | Sunday 15 February 2026 05:15:39 +0000 (0:00:01.797) 0:02:20.274 *******
2026-02-15 05:16:27.689040 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:16:27.689051 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:16:27.689103 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:16:27.689114 | orchestrator |
2026-02-15 05:16:27.689125 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2026-02-15 05:16:27.689137 | orchestrator | Sunday 15 February 2026 05:15:41 +0000 (0:00:01.561) 0:02:21.836 *******
2026-02-15 05:16:27.689148 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:16:27.689161 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:16:27.689172 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:16:27.689183 | orchestrator |
2026-02-15 05:16:27.689195 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2026-02-15 05:16:27.689231 | orchestrator | Sunday 15 February 2026 05:15:43 +0000 (0:00:01.820) 0:02:23.656 *******
2026-02-15 05:16:27.689257 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:16:27.689268 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:16:27.689279 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:16:27.689290 | orchestrator |
2026-02-15 05:16:27.689300 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2026-02-15 05:16:27.689311 | orchestrator | Sunday 15 February 2026 05:15:45 +0000 (0:00:02.210) 0:02:25.867 *******
2026-02-15 05:16:27.689322 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:16:27.689333 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:16:27.689344 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:16:27.689354 | orchestrator |
2026-02-15 05:16:27.689365 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2026-02-15 05:16:27.689378 | orchestrator | Sunday 15 February 2026 05:15:47 +0000 (0:00:01.736) 0:02:27.603 *******
2026-02-15 05:16:27.689390 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:16:27.689404 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:16:27.689417 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:16:27.689429 | orchestrator |
2026-02-15 05:16:27.689442 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2026-02-15 05:16:27.689455 | orchestrator | Sunday 15 February 2026 05:15:49 +0000 (0:00:02.003) 0:02:29.606 *******
2026-02-15 05:16:27.689467 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:16:27.689479 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:16:27.689492 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:16:27.689504 | orchestrator |
2026-02-15 05:16:27.689516 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2026-02-15 05:16:27.689529 | orchestrator | Sunday 15 February 2026 05:15:50 +0000 (0:00:01.405) 0:02:31.012 *******
2026-02-15 05:16:27.689542 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:16:27.689555 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:16:27.689568 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:16:27.689580 | orchestrator |
2026-02-15 05:16:27.689593 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2026-02-15 05:16:27.689605 | orchestrator | Sunday 15 February 2026 05:15:51 +0000 (0:00:01.368) 0:02:32.381 *******
2026-02-15 05:16:27.689617 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:16:27.689630 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:16:27.689642 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:16:27.689655 | orchestrator |
2026-02-15 05:16:27.689667 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2026-02-15 05:16:27.689680 | orchestrator | Sunday 15 February 2026 05:15:53 +0000 (0:00:01.754) 0:02:34.136 *******
2026-02-15 05:16:27.689693 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:16:27.689706 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:16:27.689716 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:16:27.689727 | orchestrator |
2026-02-15 05:16:27.689738 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2026-02-15 05:16:27.689751 | orchestrator | Sunday 15 February 2026 05:15:55 +0000 (0:00:01.712) 0:02:35.849 *******
2026-02-15 05:16:27.689762 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-15 05:16:27.689773 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-15 05:16:27.689784 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-15 05:16:27.689794 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-15 05:16:27.689805 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2026-02-15 05:16:27.689816 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-15 05:16:27.689835 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-15 05:16:27.689846 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2026-02-15 05:16:27.689857 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2026-02-15 05:16:27.689867 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-15 05:16:27.689878 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2026-02-15 05:16:27.689889 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-15 05:16:27.689917 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2026-02-15 05:16:27.689928 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-15 05:16:27.689939 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-15 05:16:27.689950 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2026-02-15 05:16:27.689961 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-15 05:16:27.689971 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2026-02-15 05:16:27.689982 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-15 05:16:27.689993 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2026-02-15 05:16:27.690003 | orchestrator |
2026-02-15 05:16:27.690088 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2026-02-15 05:16:27.690101 | orchestrator |
2026-02-15 05:16:27.690112 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2026-02-15 05:16:27.690123 | orchestrator | Sunday 15 February 2026 05:15:59 +0000 (0:00:04.538) 0:02:40.388 *******
2026-02-15 05:16:27.690134 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:16:27.690145 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:16:27.690156 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:16:27.690166 | orchestrator |
2026-02-15 05:16:27.690177 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2026-02-15 05:16:27.690188 | orchestrator | Sunday 15 February 2026 05:16:01 +0000 (0:00:01.419) 0:02:41.807 *******
2026-02-15 05:16:27.690199 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:16:27.690210 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:16:27.690220 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:16:27.690231 | orchestrator |
2026-02-15 05:16:27.690241 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2026-02-15 05:16:27.690252 | orchestrator | Sunday 15 February 2026 05:16:03 +0000 (0:00:01.736) 0:02:43.543 *******
2026-02-15 05:16:27.690263 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:16:27.690273 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:16:27.690284 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:16:27.690295 | orchestrator |
2026-02-15 05:16:27.690306 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2026-02-15 05:16:27.690316 | orchestrator | Sunday 15 February 2026 05:16:04 +0000 (0:00:01.656) 0:02:45.200 *******
2026-02-15 05:16:27.690327 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 05:16:27.690338 | orchestrator |
2026-02-15 05:16:27.690349 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2026-02-15 05:16:27.690360 | orchestrator | Sunday 15 February 2026 05:16:06 +0000 (0:00:01.800) 0:02:47.001 *******
2026-02-15 05:16:27.690370 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:16:27.690381 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:16:27.690392 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:16:27.690410 | orchestrator |
2026-02-15 05:16:27.690421 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2026-02-15 05:16:27.690432 | orchestrator | Sunday 15 February 2026 05:16:07 +0000 (0:00:01.415) 0:02:48.417 *******
2026-02-15 05:16:27.690443 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:16:27.690453 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:16:27.690464 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:16:27.690474 | orchestrator |
2026-02-15 05:16:27.690485 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2026-02-15 05:16:27.690496 | orchestrator | Sunday 15 February 2026 05:16:09 +0000 (0:00:01.390) 0:02:49.807 *******
2026-02-15 05:16:27.690506 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:16:27.690517 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:16:27.690528 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:16:27.690539 | orchestrator |
2026-02-15 05:16:27.690549 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2026-02-15 05:16:27.690648 | orchestrator | Sunday 15 February 2026 05:16:10 +0000 (0:00:01.505) 0:02:51.313 *******
2026-02-15 05:16:27.690661 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:16:27.690672 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:16:27.690682 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:16:27.690693 | orchestrator |
2026-02-15 05:16:27.690704 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2026-02-15 05:16:27.690723 | orchestrator | Sunday 15 February 2026 05:16:12 +0000 (0:00:01.721) 0:02:53.035 *******
2026-02-15 05:16:27.690735 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:16:27.690745 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:16:27.690786 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:16:27.690797 | orchestrator |
2026-02-15 05:16:27.690808 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2026-02-15 05:16:27.690819 | orchestrator | Sunday 15 February 2026 05:16:14 +0000 (0:00:02.457) 0:02:55.493 *******
2026-02-15 05:16:27.690830 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:16:27.690840 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:16:27.690851 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:16:27.690889 | orchestrator |
2026-02-15 05:16:27.690902 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2026-02-15 05:16:27.690913 | orchestrator | Sunday 15 February 2026 05:16:17 +0000 (0:00:02.318) 0:02:57.811 *******
2026-02-15 05:16:27.690924 | orchestrator | changed: [testbed-node-3]
2026-02-15 05:16:27.690934 | orchestrator | changed: [testbed-node-4]
2026-02-15 05:16:27.690945 | orchestrator | changed: [testbed-node-5]
2026-02-15 05:16:27.690956 | orchestrator |
2026-02-15 05:16:27.690967 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2026-02-15 05:16:27.690977 | orchestrator |
2026-02-15 05:16:27.690988 | orchestrator | TASK [Get home directory of operator user] *************************************
2026-02-15 05:16:27.690999 | orchestrator | Sunday 15 February 2026 05:16:25 +0000 (0:00:08.174) 0:03:05.985 *******
2026-02-15 05:16:27.691010 | orchestrator | ok: [testbed-manager]
2026-02-15 05:16:27.691021 | orchestrator |
2026-02-15 05:16:27.691032 | orchestrator | TASK [Create .kube directory] **************************************************
2026-02-15 05:16:27.691053 | orchestrator | Sunday 15 February 2026 05:16:27 +0000 (0:00:02.188) 0:03:08.174 *******
2026-02-15 05:17:37.151726 | orchestrator | ok: [testbed-manager]
2026-02-15 05:17:37.151831 | orchestrator |
2026-02-15 05:17:37.151845 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2026-02-15 05:17:37.151856 | orchestrator | Sunday 15 February 2026 05:16:29 +0000 (0:00:01.467) 0:03:09.641 *******
2026-02-15 05:17:37.151866 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2026-02-15 05:17:37.151875 | orchestrator |
2026-02-15 05:17:37.151884 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2026-02-15 05:17:37.151893 | orchestrator | Sunday 15 February 2026 05:16:30 +0000 (0:00:01.568) 0:03:11.210 *******
2026-02-15 05:17:37.151902 | orchestrator | changed: [testbed-manager]
2026-02-15 05:17:37.151930 | orchestrator |
2026-02-15 05:17:37.151940 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2026-02-15 05:17:37.151949 | orchestrator | Sunday 15 February 2026 05:16:32 +0000 (0:00:01.966) 0:03:13.176 *******
2026-02-15 05:17:37.151957 | orchestrator | changed: [testbed-manager]
2026-02-15 05:17:37.151966 | orchestrator |
2026-02-15 05:17:37.151975 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2026-02-15 05:17:37.152040 | orchestrator | Sunday 15 February 2026 05:16:34 +0000 (0:00:01.553) 0:03:14.729 *******
2026-02-15 05:17:37.152051 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-15 05:17:37.152059 | orchestrator |
2026-02-15 05:17:37.152068 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2026-02-15 05:17:37.152076 | orchestrator | Sunday 15 February 2026 05:16:37 +0000 (0:00:02.939) 0:03:17.669 *******
2026-02-15 05:17:37.152084 | orchestrator | changed: [testbed-manager -> localhost]
2026-02-15 05:17:37.152093 | orchestrator |
2026-02-15 05:17:37.152101 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2026-02-15 05:17:37.152110 | orchestrator | Sunday 15 February 2026 05:16:39 +0000 (0:00:01.865) 0:03:19.534 *******
2026-02-15 05:17:37.152118 | orchestrator | ok: [testbed-manager]
2026-02-15 05:17:37.152127 | orchestrator |
2026-02-15 05:17:37.152136 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2026-02-15 05:17:37.152144 | orchestrator | Sunday 15 February 2026 05:16:40 +0000 (0:00:01.444) 0:03:20.979 *******
2026-02-15 05:17:37.152153 | orchestrator | ok: [testbed-manager]
2026-02-15 05:17:37.152161 | orchestrator |
2026-02-15 05:17:37.152170 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2026-02-15 05:17:37.152179 | orchestrator |
2026-02-15 05:17:37.152187 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2026-02-15 05:17:37.152195 | orchestrator | Sunday 15 February 2026 05:16:42 +0000 (0:00:01.144) 0:03:22.553 *******
2026-02-15 05:17:37.152204 | orchestrator | ok: [testbed-manager]
2026-02-15 05:17:37.152212 | orchestrator |
2026-02-15 05:17:37.152221 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2026-02-15 05:17:37.152229 | orchestrator | Sunday 15 February 2026 05:16:43 +0000 (0:00:01.543) 0:03:23.697 *******
2026-02-15 05:17:37.152238 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2026-02-15 05:17:37.152247 | orchestrator |
2026-02-15 05:17:37.152255 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2026-02-15 05:17:37.152264 | orchestrator | Sunday 15 February 2026 05:16:44 +0000 (0:00:01.887) 0:03:25.241 *******
2026-02-15 05:17:37.152272 | orchestrator | ok: [testbed-manager]
2026-02-15 05:17:37.152282 | orchestrator |
2026-02-15 05:17:37.152292 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2026-02-15 05:17:37.152302 | orchestrator | Sunday 15 February 2026 05:16:46 +0000 (0:00:01.887) 0:03:27.129 *******
2026-02-15 05:17:37.152311 | orchestrator | ok: [testbed-manager]
2026-02-15 05:17:37.152321 | orchestrator |
2026-02-15 05:17:37.152331 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2026-02-15 05:17:37.152341 | orchestrator | Sunday 15 February 2026 05:16:49 +0000 (0:00:02.777) 0:03:29.906 *******
2026-02-15 05:17:37.152350 | orchestrator | ok: [testbed-manager]
2026-02-15 05:17:37.152360 | orchestrator |
2026-02-15 05:17:37.152370 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2026-02-15 05:17:37.152379 | orchestrator | Sunday 15 February 2026 05:16:50 +0000 (0:00:01.513) 0:03:31.420 *******
2026-02-15 05:17:37.152389 | orchestrator | ok: [testbed-manager]
2026-02-15 05:17:37.152399 | orchestrator |
2026-02-15 05:17:37.152409 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2026-02-15 05:17:37.152420 | orchestrator | Sunday 15 February 2026 05:16:52 +0000 (0:00:01.454) 0:03:32.874 *******
2026-02-15 05:17:37.152429 | orchestrator | ok: [testbed-manager]
2026-02-15 05:17:37.152439 | orchestrator |
2026-02-15 05:17:37.152449 | orchestrator | TASK [kubectl : Install required packages] *************************************
2026-02-15 05:17:37.152467 | orchestrator | Sunday 15 February 2026 05:16:54 +0000 (0:00:01.629) 0:03:34.504 *******
2026-02-15 05:17:37.152476 | orchestrator | ok: [testbed-manager]
2026-02-15 05:17:37.152486 | orchestrator |
2026-02-15 05:17:37.152496 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2026-02-15 05:17:37.152506 | orchestrator | Sunday 15 February 2026 05:16:56 +0000 (0:00:02.488) 0:03:36.992 *******
2026-02-15 05:17:37.152516 | orchestrator | ok: [testbed-manager]
2026-02-15 05:17:37.152525 | orchestrator |
2026-02-15 05:17:37.152535 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2026-02-15 05:17:37.152545 | orchestrator |
2026-02-15 05:17:37.152555 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2026-02-15 05:17:37.152565 | orchestrator | Sunday 15 February 2026 05:16:58 +0000 (0:00:01.694) 0:03:38.687 *******
2026-02-15 05:17:37.152575 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:17:37.152585 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:17:37.152594 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:17:37.152604 | orchestrator |
2026-02-15 05:17:37.152613 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2026-02-15 05:17:37.152623 | orchestrator | Sunday 15 February 2026 05:16:59 +0000 (0:00:01.358) 0:03:40.046 *******
2026-02-15 05:17:37.152633 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:17:37.152644 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:17:37.152653 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:17:37.152662 | orchestrator |
2026-02-15 05:17:37.152685 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2026-02-15 05:17:37.152694 | orchestrator | Sunday 15 February 2026 05:17:01 +0000 (0:00:01.639) 0:03:41.685 *******
2026-02-15 05:17:37.152703 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 05:17:37.152712 | orchestrator |
2026-02-15 05:17:37.152720 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2026-02-15 05:17:37.152729 | orchestrator | Sunday 15 February 2026 05:17:02 +0000 (0:00:01.710) 0:03:43.396 *******
2026-02-15 05:17:37.152737 | orchestrator | changed: [testbed-node-0 -> localhost]
2026-02-15 05:17:37.152745 | orchestrator |
2026-02-15 05:17:37.152754 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2026-02-15 05:17:37.152762 | orchestrator | Sunday 15 February 2026 05:17:04 +0000 (0:00:01.887) 0:03:45.284 *******
2026-02-15 05:17:37.152771 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 05:17:37.152779 | orchestrator |
2026-02-15 05:17:37.152788 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2026-02-15 05:17:37.152796 | orchestrator | Sunday 15 February 2026 05:17:06 +0000 (0:00:01.935) 0:03:47.219 *******
2026-02-15 05:17:37.152805 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:17:37.152813 | orchestrator |
2026-02-15 05:17:37.152822 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2026-02-15 05:17:37.152830 | orchestrator | Sunday 15 February 2026 05:17:07 +0000 (0:00:01.120) 0:03:48.340 *******
2026-02-15 05:17:37.152839 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 05:17:37.152847 | orchestrator |
2026-02-15 05:17:37.152856 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2026-02-15 05:17:37.152864 | orchestrator | Sunday 15 February 2026 05:17:09 +0000 (0:00:02.028) 0:03:50.369 *******
2026-02-15 05:17:37.152873 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 05:17:37.152881 | orchestrator |
2026-02-15 05:17:37.152890 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2026-02-15 05:17:37.152898 | orchestrator | Sunday 15 February 2026 05:17:12 +0000 (0:00:02.159) 0:03:52.528 *******
2026-02-15 05:17:37.152906 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 05:17:37.152915 | orchestrator |
2026-02-15 05:17:37.152923 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2026-02-15 05:17:37.152932 | orchestrator | Sunday 15 February 2026 05:17:13 +0000 (0:00:01.189) 0:03:53.718 *******
2026-02-15 05:17:37.152946 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-02-15 05:17:37.152955 | orchestrator |
2026-02-15 05:17:37.152963 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2026-02-15 05:17:37.152972 | orchestrator | Sunday 15 February 2026 05:17:14 +0000 (0:00:01.155) 0:03:54.873 *******
2026-02-15 05:17:37.152980 | orchestrator | ok: [testbed-node-0 -> localhost] => {
2026-02-15 05:17:37.153006 | orchestrator |  "msg": "Installed Cilium version: 1.18.2, Target Cilium version: v1.18.2, Update needed: False\n"
2026-02-15 05:17:37.153016 | orchestrator | }
2026-02-15 05:17:37.153025 | orchestrator |
2026-02-15 05:17:37.153033 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2026-02-15 05:17:37.153042 | orchestrator | Sunday 15 February 2026 05:17:15 +0000 (0:00:01.177) 0:03:56.050 *******
2026-02-15 05:17:37.153050 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:17:37.153058 | orchestrator |
2026-02-15 05:17:37.153067 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2026-02-15 05:17:37.153075 | orchestrator | Sunday 15 February 2026 05:17:16 +0000 (0:00:01.141) 0:03:57.192 *******
2026-02-15 05:17:37.153083 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
2026-02-15 05:17:37.153092 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2026-02-15 05:17:37.153100 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2026-02-15 05:17:37.153109 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2026-02-15 05:17:37.153118 | orchestrator |
2026-02-15 05:17:37.153126 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2026-02-15 05:17:37.153134 | orchestrator | Sunday 15 February 2026 05:17:22 +0000 (0:00:05.569) 0:04:02.761 *******
2026-02-15 05:17:37.153143 | orchestrator
| ok: [testbed-node-0 -> localhost] 2026-02-15 05:17:37.153151 | orchestrator | 2026-02-15 05:17:37.153159 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-02-15 05:17:37.153168 | orchestrator | Sunday 15 February 2026 05:17:24 +0000 (0:00:02.551) 0:04:05.313 ******* 2026-02-15 05:17:37.153176 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-15 05:17:37.153185 | orchestrator | 2026-02-15 05:17:37.153193 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-02-15 05:17:37.153202 | orchestrator | Sunday 15 February 2026 05:17:27 +0000 (0:00:02.732) 0:04:08.046 ******* 2026-02-15 05:17:37.153210 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-02-15 05:17:37.153219 | orchestrator | 2026-02-15 05:17:37.153228 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-02-15 05:17:37.153243 | orchestrator | Sunday 15 February 2026 05:17:31 +0000 (0:00:04.108) 0:04:12.154 ******* 2026-02-15 05:17:37.153252 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:17:37.153261 | orchestrator | 2026-02-15 05:17:37.153269 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-02-15 05:17:37.153277 | orchestrator | Sunday 15 February 2026 05:17:32 +0000 (0:00:01.137) 0:04:13.292 ******* 2026-02-15 05:17:37.153286 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-02-15 05:17:37.153295 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-02-15 05:17:37.153303 | orchestrator | 2026-02-15 05:17:37.153311 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-02-15 05:17:37.153320 | orchestrator | Sunday 15 February 2026 05:17:35 +0000 (0:00:02.963) 0:04:16.255 ******* 2026-02-15 
05:17:37.153329 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:17:37.153343 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:18:04.377998 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:18:04.378206 | orchestrator | 2026-02-15 05:18:04.378233 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-02-15 05:18:04.378257 | orchestrator | Sunday 15 February 2026 05:17:37 +0000 (0:00:01.382) 0:04:17.638 ******* 2026-02-15 05:18:04.378313 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:18:04.378334 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:18:04.378358 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:18:04.378379 | orchestrator | 2026-02-15 05:18:04.378399 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-02-15 05:18:04.378422 | orchestrator | 2026-02-15 05:18:04.378443 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-02-15 05:18:04.378464 | orchestrator | Sunday 15 February 2026 05:17:39 +0000 (0:00:02.159) 0:04:19.798 ******* 2026-02-15 05:18:04.378485 | orchestrator | ok: [testbed-manager] 2026-02-15 05:18:04.378509 | orchestrator | 2026-02-15 05:18:04.378530 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-02-15 05:18:04.378551 | orchestrator | Sunday 15 February 2026 05:17:40 +0000 (0:00:01.177) 0:04:20.975 ******* 2026-02-15 05:18:04.378589 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-02-15 05:18:04.378611 | orchestrator | 2026-02-15 05:18:04.378631 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-02-15 05:18:04.378654 | orchestrator | Sunday 15 February 2026 05:17:41 +0000 (0:00:01.476) 0:04:22.452 ******* 2026-02-15 05:18:04.378675 | orchestrator | ok: [testbed-manager] 2026-02-15 05:18:04.378695 | 
orchestrator | 2026-02-15 05:18:04.378716 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-02-15 05:18:04.378737 | orchestrator | 2026-02-15 05:18:04.378757 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-02-15 05:18:04.378776 | orchestrator | Sunday 15 February 2026 05:17:47 +0000 (0:00:05.453) 0:04:27.906 ******* 2026-02-15 05:18:04.378796 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:18:04.378816 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:18:04.378836 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:18:04.378856 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:18:04.378876 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:18:04.378896 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:18:04.378917 | orchestrator | 2026-02-15 05:18:04.378938 | orchestrator | TASK [Manage labels] *********************************************************** 2026-02-15 05:18:04.378958 | orchestrator | Sunday 15 February 2026 05:17:49 +0000 (0:00:02.105) 0:04:30.012 ******* 2026-02-15 05:18:04.379004 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-15 05:18:04.379024 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-15 05:18:04.379043 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-15 05:18:04.379062 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-02-15 05:18:04.379080 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-02-15 05:18:04.379099 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-15 05:18:04.379117 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 
2026-02-15 05:18:04.379135 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-15 05:18:04.379154 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-02-15 05:18:04.379172 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-15 05:18:04.379192 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-15 05:18:04.379211 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-02-15 05:18:04.379229 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-15 05:18:04.379248 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-15 05:18:04.379267 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-02-15 05:18:04.379300 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-15 05:18:04.379320 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-15 05:18:04.379340 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-02-15 05:18:04.379360 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-15 05:18:04.379378 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-15 05:18:04.379398 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-02-15 05:18:04.379417 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-15 05:18:04.379436 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-15 
05:18:04.379456 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-02-15 05:18:04.379477 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-15 05:18:04.379496 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-15 05:18:04.379541 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-02-15 05:18:04.379561 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-15 05:18:04.379580 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-15 05:18:04.379598 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-02-15 05:18:04.379617 | orchestrator | 2026-02-15 05:18:04.379636 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-02-15 05:18:04.379654 | orchestrator | Sunday 15 February 2026 05:17:59 +0000 (0:00:10.361) 0:04:40.373 ******* 2026-02-15 05:18:04.379672 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:18:04.379692 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:18:04.379711 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:18:04.379729 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:18:04.379746 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:18:04.379765 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:18:04.379783 | orchestrator | 2026-02-15 05:18:04.379802 | orchestrator | TASK [Manage taints] *********************************************************** 2026-02-15 05:18:04.379828 | orchestrator | Sunday 15 February 2026 05:18:01 +0000 (0:00:01.975) 0:04:42.349 ******* 2026-02-15 05:18:04.379848 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:18:04.379866 | orchestrator | skipping: [testbed-node-4] 
2026-02-15 05:18:04.379886 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:18:04.379904 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:18:04.379925 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:18:04.379944 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:18:04.379992 | orchestrator |
2026-02-15 05:18:04.380014 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 05:18:04.380034 | orchestrator | testbed-manager : ok=21  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 05:18:04.380056 | orchestrator | testbed-node-0 : ok=53  changed=14  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-15 05:18:04.380076 | orchestrator | testbed-node-1 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-15 05:18:04.380479 | orchestrator | testbed-node-2 : ok=38  changed=9  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-02-15 05:18:04.380500 | orchestrator | testbed-node-3 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-15 05:18:04.380533 | orchestrator | testbed-node-4 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-15 05:18:04.380553 | orchestrator | testbed-node-5 : ok=16  changed=1  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-02-15 05:18:04.380572 | orchestrator |
2026-02-15 05:18:04.380590 | orchestrator |
2026-02-15 05:18:04.380609 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 05:18:04.380630 | orchestrator | Sunday 15 February 2026 05:18:04 +0000 (0:00:02.492) 0:04:44.841 *******
2026-02-15 05:18:04.380649 | orchestrator | ===============================================================================
2026-02-15 05:18:04.380667 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 27.17s
2026-02-15 05:18:04.380689 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 23.39s
2026-02-15 05:18:04.380709 | orchestrator | Manage labels ---------------------------------------------------------- 10.36s
2026-02-15 05:18:04.380727 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.17s
2026-02-15 05:18:04.380746 | orchestrator | k3s_server_post : Wait for Cilium resources ----------------------------- 5.57s
2026-02-15 05:18:04.380763 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.45s
2026-02-15 05:18:04.380781 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 4.54s
2026-02-15 05:18:04.380799 | orchestrator | k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites --- 4.49s
2026-02-15 05:18:04.380817 | orchestrator | k3s_server_post : Apply BGP manifests ----------------------------------- 4.11s
2026-02-15 05:18:04.380837 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 2.97s
2026-02-15 05:18:04.380855 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.96s
2026-02-15 05:18:04.380874 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 2.94s
2026-02-15 05:18:04.380894 | orchestrator | k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured --- 2.79s
2026-02-15 05:18:04.380911 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.78s
2026-02-15 05:18:04.380929 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.74s
2026-02-15 05:18:04.380947 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.73s
2026-02-15 05:18:04.381067 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 2.69s
2026-02-15 05:18:04.381090 | orchestrator | k3s_server : Stop k3s --------------------------------------------------- 2.62s
2026-02-15 05:18:04.381127 | orchestrator | k3s_server_post : Set _cilium_bgp_neighbors fact ------------------------ 2.55s
2026-02-15 05:18:04.896491 | orchestrator | Manage taints ----------------------------------------------------------- 2.49s
2026-02-15 05:18:05.234152 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-15 05:18:05.234249 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/200-infrastructure.sh
2026-02-15 05:18:05.243062 | orchestrator | + set -e
2026-02-15 05:18:05.243725 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-15 05:18:05.243755 | orchestrator | ++ export INTERACTIVE=false
2026-02-15 05:18:05.243769 | orchestrator | ++ INTERACTIVE=false
2026-02-15 05:18:05.243781 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-15 05:18:05.243794 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-15 05:18:05.243806 | orchestrator | + osism apply openstackclient
2026-02-15 05:18:17.456245 | orchestrator | 2026-02-15 05:18:17 | INFO  | Task c314415a-84ed-44bd-af66-a12b55c39223 (openstackclient) was prepared for execution.
2026-02-15 05:18:17.456362 | orchestrator | 2026-02-15 05:18:17 | INFO  | It takes a moment until task c314415a-84ed-44bd-af66-a12b55c39223 (openstackclient) has been started and output is visible here.
2026-02-15 05:18:53.797160 | orchestrator |
2026-02-15 05:18:53.797296 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-02-15 05:18:53.797321 | orchestrator |
2026-02-15 05:18:53.797359 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-02-15 05:18:53.797377 | orchestrator | Sunday 15 February 2026 05:18:23 +0000 (0:00:01.931) 0:00:01.931 *******
2026-02-15 05:18:53.797394 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-02-15 05:18:53.797410 | orchestrator |
2026-02-15 05:18:53.797424 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-02-15 05:18:53.797439 | orchestrator | Sunday 15 February 2026 05:18:25 +0000 (0:00:01.861) 0:00:03.793 *******
2026-02-15 05:18:53.797453 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-02-15 05:18:53.797469 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient/data)
2026-02-15 05:18:53.797485 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-02-15 05:18:53.797500 | orchestrator |
2026-02-15 05:18:53.797515 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-02-15 05:18:53.797530 | orchestrator | Sunday 15 February 2026 05:18:27 +0000 (0:00:02.246) 0:00:06.039 *******
2026-02-15 05:18:53.797546 | orchestrator | changed: [testbed-manager]
2026-02-15 05:18:53.797561 | orchestrator |
2026-02-15 05:18:53.797576 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-02-15 05:18:53.797591 | orchestrator | Sunday 15 February 2026 05:18:30 +0000 (0:00:02.170) 0:00:08.210 *******
2026-02-15 05:18:53.797606 | orchestrator | ok: [testbed-manager]
2026-02-15 05:18:53.797622 | orchestrator |
2026-02-15 05:18:53.797637 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-02-15 05:18:53.797652 | orchestrator | Sunday 15 February 2026 05:18:33 +0000 (0:00:03.130) 0:00:11.340 *******
2026-02-15 05:18:53.797667 | orchestrator | ok: [testbed-manager]
2026-02-15 05:18:53.797682 | orchestrator |
2026-02-15 05:18:53.797700 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-02-15 05:18:53.797718 | orchestrator | Sunday 15 February 2026 05:18:35 +0000 (0:00:01.885) 0:00:13.226 *******
2026-02-15 05:18:53.797734 | orchestrator | ok: [testbed-manager]
2026-02-15 05:18:53.797752 | orchestrator |
2026-02-15 05:18:53.797766 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-02-15 05:18:53.797781 | orchestrator | Sunday 15 February 2026 05:18:36 +0000 (0:00:01.503) 0:00:14.729 *******
2026-02-15 05:18:53.797796 | orchestrator | changed: [testbed-manager]
2026-02-15 05:18:53.797811 | orchestrator |
2026-02-15 05:18:53.797828 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-02-15 05:18:53.797843 | orchestrator | Sunday 15 February 2026 05:18:47 +0000 (0:00:11.152) 0:00:25.881 *******
2026-02-15 05:18:53.797860 | orchestrator | changed: [testbed-manager]
2026-02-15 05:18:53.797877 | orchestrator |
2026-02-15 05:18:53.797893 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-02-15 05:18:53.797910 | orchestrator | Sunday 15 February 2026 05:18:49 +0000 (0:00:02.070) 0:00:27.951 *******
2026-02-15 05:18:53.797972 | orchestrator | changed: [testbed-manager]
2026-02-15 05:18:53.797988 | orchestrator |
2026-02-15 05:18:53.798004 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-02-15 05:18:53.798092 | orchestrator | Sunday 15 February 2026 05:18:51 +0000 (0:00:01.608) 0:00:29.560 *******
2026-02-15 05:18:53.798107 | orchestrator | ok: [testbed-manager]
2026-02-15 05:18:53.798121 | orchestrator |
2026-02-15 05:18:53.798135 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 05:18:53.798162 | orchestrator | testbed-manager : ok=10  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-02-15 05:18:53.798178 | orchestrator |
2026-02-15 05:18:53.798225 | orchestrator |
2026-02-15 05:18:53.798240 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 05:18:53.798254 | orchestrator | Sunday 15 February 2026 05:18:53 +0000 (0:00:02.039) 0:00:31.599 *******
2026-02-15 05:18:53.798268 | orchestrator | ===============================================================================
2026-02-15 05:18:53.798282 | orchestrator | osism.services.openstackclient : Restart openstackclient service ------- 11.15s
2026-02-15 05:18:53.798298 | orchestrator | osism.services.openstackclient : Manage openstackclient service --------- 3.13s
2026-02-15 05:18:53.798313 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.25s
2026-02-15 05:18:53.798329 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.17s
2026-02-15 05:18:53.798345 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.07s
2026-02-15 05:18:53.798361 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 2.04s
2026-02-15 05:18:53.798377 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.89s
2026-02-15 05:18:53.798393 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.86s
2026-02-15 05:18:53.798408 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.61s
2026-02-15 05:18:53.798425 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.50s
2026-02-15 05:18:54.154099 | orchestrator | + osism apply -a upgrade common
2026-02-15 05:18:56.334516 | orchestrator | 2026-02-15 05:18:56 | INFO  | Task 645cf241-1fdf-4332-a849-18b684164e68 (common) was prepared for execution.
2026-02-15 05:18:56.334628 | orchestrator | 2026-02-15 05:18:56 | INFO  | It takes a moment until task 645cf241-1fdf-4332-a849-18b684164e68 (common) has been started and output is visible here.
2026-02-15 05:19:16.423060 | orchestrator |
2026-02-15 05:19:16.423205 | orchestrator | PLAY [Apply role common] *******************************************************
2026-02-15 05:19:16.423233 | orchestrator |
2026-02-15 05:19:16.423275 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-15 05:19:16.423296 | orchestrator | Sunday 15 February 2026 05:19:02 +0000 (0:00:01.954) 0:00:01.954 *******
2026-02-15 05:19:16.423316 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 05:19:16.423335 | orchestrator |
2026-02-15 05:19:16.423354 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2026-02-15 05:19:16.423372 | orchestrator | Sunday 15 February 2026 05:19:06 +0000 (0:00:03.365) 0:00:05.320 *******
2026-02-15 05:19:16.423391 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 05:19:16.423409 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 05:19:16.423427 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 05:19:16.423445 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 05:19:16.423464 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-15 05:19:16.423483 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-15 05:19:16.423502 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 05:19:16.423522 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-15 05:19:16.423542 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 05:19:16.423562 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-15 05:19:16.423582 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-15 05:19:16.423601 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron'])
2026-02-15 05:19:16.423620 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-15 05:19:16.423667 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-15 05:19:16.423686 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-15 05:19:16.423705 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-15 05:19:16.423722 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-15 05:19:16.423740 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2026-02-15 05:19:16.423759 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-15 05:19:16.423777 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-15 05:19:16.423794 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox'])
2026-02-15 05:19:16.423812 | orchestrator |
2026-02-15 05:19:16.423832 | orchestrator | TASK [common : include_tasks] **************************************************
2026-02-15 05:19:16.423849 | orchestrator | Sunday 15 February 2026 05:19:10 +0000 (0:00:04.075) 0:00:09.395 *******
2026-02-15 05:19:16.423869 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 05:19:16.423882 | orchestrator |
2026-02-15 05:19:16.423893 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] *********
2026-02-15 05:19:16.423928 | orchestrator | Sunday 15 February 2026 05:19:13 +0000 (0:00:03.167) 0:00:12.562 *******
2026-02-15 05:19:16.423945 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:19:16.423970 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:19:16.424015 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:19:16.424027 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:19:16.424039 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:19:16.424062 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:19:16.424074 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:19:16.424258 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:19:16.424272 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:19:16.424301 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:19:19.226012 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:19:19.226207 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:19:19.226239 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:19:19.226247 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:19:19.226253 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:19:19.226260 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:19.226267 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:19.226291 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:19.226298 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:19.226312 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:19.226319 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:19.226326 | orchestrator | 2026-02-15 05:19:19.226334 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-15 05:19:19.226340 | orchestrator | Sunday 15 February 2026 05:19:18 +0000 (0:00:05.085) 0:00:17.647 ******* 2026-02-15 05:19:19.226348 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:19.226356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:19.226362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:19.226415 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:19.226432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:21.676354 | orchestrator | skipping: [testbed-manager] 
=> (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:21.676471 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:19:21.676490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:21.676507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:21.676519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:21.676579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:21.676593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:21.676605 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:19:21.676616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:21.676666 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:21.676681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:21.676705 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:19:21.676724 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:19:21.676743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:21.676762 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:21.676781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:21.676800 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:19:21.676813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:21.676824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:21.676845 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:19:21.676867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:25.417202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:25.417310 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:19:25.417327 | orchestrator | 2026-02-15 05:19:25.417340 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-15 05:19:25.417353 | orchestrator | Sunday 15 February 2026 05:19:21 +0000 (0:00:03.316) 0:00:20.964 ******* 2026-02-15 05:19:25.417384 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:25.417399 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:25.417411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:25.417520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:25.417533 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:25.417580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:25.417623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:25.417636 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:19:25.417648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:25.417659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:25.417671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:25.417681 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:19:25.417693 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:19:25.417706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:25.417720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:25.417741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:25.417769 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:19:38.583620 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:38.583704 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:19:38.583717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:38.583724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:38.583729 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:38.583746 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:38.583752 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:19:38.583757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:38.583762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:19:38.583767 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:19:38.583771 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:19:38.583776 | orchestrator | 2026-02-15 05:19:38.583782 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] ***************** 2026-02-15 05:19:38.583788 | orchestrator | Sunday 15 February 2026 05:19:25 +0000 (0:00:03.742) 0:00:24.706 ******* 2026-02-15 05:19:38.583795 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:19:38.583802 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:19:38.583810 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:19:38.583831 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:19:38.583839 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:19:38.583847 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:19:38.583854 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:19:38.583859 | orchestrator | 2026-02-15 05:19:38.583863 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-02-15 05:19:38.583868 | orchestrator | Sunday 15 February 2026 05:19:27 +0000 (0:00:02.326) 0:00:27.032 ******* 2026-02-15 05:19:38.583872 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:19:38.583877 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:19:38.583926 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:19:38.583933 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:19:38.583937 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:19:38.583942 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:19:38.583950 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:19:38.583955 | orchestrator | 2026-02-15 05:19:38.583959 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-02-15 05:19:38.583964 | 
orchestrator | Sunday 15 February 2026 05:19:29 +0000 (0:00:02.144) 0:00:29.177 ******* 2026-02-15 05:19:38.583968 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:19:38.583973 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:19:38.583977 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:19:38.583982 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:19:38.583986 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:19:38.583991 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:19:38.583995 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:19:38.584000 | orchestrator | 2026-02-15 05:19:38.584004 | orchestrator | TASK [common : Copying over kolla.target] ************************************** 2026-02-15 05:19:38.584009 | orchestrator | Sunday 15 February 2026 05:19:31 +0000 (0:00:02.126) 0:00:31.303 ******* 2026-02-15 05:19:38.584014 | orchestrator | changed: [testbed-manager] 2026-02-15 05:19:38.584023 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:19:38.584028 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:19:38.584032 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:19:38.584037 | orchestrator | changed: [testbed-node-3] 2026-02-15 05:19:38.584041 | orchestrator | changed: [testbed-node-4] 2026-02-15 05:19:38.584046 | orchestrator | changed: [testbed-node-5] 2026-02-15 05:19:38.584050 | orchestrator | 2026-02-15 05:19:38.584055 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-15 05:19:38.584059 | orchestrator | Sunday 15 February 2026 05:19:35 +0000 (0:00:03.273) 0:00:34.577 ******* 2026-02-15 05:19:38.584066 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:19:38.584071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:19:38.584076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:19:38.584081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:19:38.584090 | orchestrator 
| changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:19:40.464479 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464607 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:19:40.464623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:19:40.464636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464647 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464659 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464689 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464702 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464726 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464738 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464764 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464784 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:19:40.464861 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:02.069179 | orchestrator | 2026-02-15 05:20:02.069386 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-15 05:20:02.069423 | orchestrator | Sunday 15 February 2026 05:19:40 +0000 (0:00:05.181) 0:00:39.759 ******* 2026-02-15 05:20:02.069444 | orchestrator | [WARNING]: Skipped 2026-02-15 05:20:02.069464 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-15 05:20:02.069484 | orchestrator | to this access issue: 2026-02-15 05:20:02.069504 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-15 05:20:02.069523 | orchestrator | directory 2026-02-15 05:20:02.069542 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 05:20:02.069557 | orchestrator | 2026-02-15 05:20:02.069569 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-15 05:20:02.069580 | orchestrator | Sunday 15 February 2026 05:19:42 +0000 (0:00:02.352) 0:00:42.111 ******* 2026-02-15 05:20:02.069591 | orchestrator | [WARNING]: Skipped 2026-02-15 05:20:02.069602 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-15 05:20:02.069613 | orchestrator | to this access issue: 2026-02-15 05:20:02.069623 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-02-15 05:20:02.069634 | orchestrator | directory 2026-02-15 05:20:02.069645 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 05:20:02.069656 | orchestrator | 2026-02-15 05:20:02.069667 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-15 05:20:02.069681 | orchestrator | Sunday 15 February 2026 05:19:44 +0000 (0:00:02.003) 0:00:44.115 ******* 2026-02-15 05:20:02.069694 | orchestrator | [WARNING]: Skipped 2026-02-15 05:20:02.069707 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-15 05:20:02.069720 | orchestrator | to this access issue: 2026-02-15 05:20:02.069734 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-15 05:20:02.069747 | orchestrator | directory 2026-02-15 05:20:02.069760 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 
05:20:02.069773 | orchestrator | 2026-02-15 05:20:02.069788 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-15 05:20:02.069801 | orchestrator | Sunday 15 February 2026 05:19:46 +0000 (0:00:01.856) 0:00:45.972 ******* 2026-02-15 05:20:02.069814 | orchestrator | [WARNING]: Skipped 2026-02-15 05:20:02.069827 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-15 05:20:02.069839 | orchestrator | to this access issue: 2026-02-15 05:20:02.069852 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-15 05:20:02.069894 | orchestrator | directory 2026-02-15 05:20:02.069913 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 05:20:02.069926 | orchestrator | 2026-02-15 05:20:02.069939 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-15 05:20:02.069953 | orchestrator | Sunday 15 February 2026 05:19:48 +0000 (0:00:01.885) 0:00:47.858 ******* 2026-02-15 05:20:02.069965 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:20:02.069979 | orchestrator | changed: [testbed-manager] 2026-02-15 05:20:02.069993 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:20:02.070006 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:20:02.070070 | orchestrator | changed: [testbed-node-3] 2026-02-15 05:20:02.070083 | orchestrator | changed: [testbed-node-4] 2026-02-15 05:20:02.070094 | orchestrator | changed: [testbed-node-5] 2026-02-15 05:20:02.070107 | orchestrator | 2026-02-15 05:20:02.070119 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-15 05:20:02.070128 | orchestrator | Sunday 15 February 2026 05:19:52 +0000 (0:00:03.974) 0:00:51.832 ******* 2026-02-15 05:20:02.070163 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 
05:20:02.070174 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:20:02.070184 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:20:02.070193 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:20:02.070202 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:20:02.070212 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:20:02.070221 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:20:02.070231 | orchestrator | 2026-02-15 05:20:02.070240 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-15 05:20:02.070250 | orchestrator | Sunday 15 February 2026 05:19:55 +0000 (0:00:03.284) 0:00:55.117 ******* 2026-02-15 05:20:02.070259 | orchestrator | ok: [testbed-manager] 2026-02-15 05:20:02.070269 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:20:02.070278 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:20:02.070287 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:20:02.070297 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:20:02.070306 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:20:02.070315 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:20:02.070324 | orchestrator | 2026-02-15 05:20:02.070334 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-15 05:20:02.070343 | orchestrator | Sunday 15 February 2026 05:19:58 +0000 (0:00:03.026) 0:00:58.144 ******* 2026-02-15 05:20:02.070382 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:02.070397 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:02.070408 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:02.070418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:02.070439 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:02.070451 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:02.070461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2026-02-15 05:20:02.070471 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:02.070493 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:10.540191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:10.540253 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:10.540263 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:10.540285 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:10.540293 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:10.540300 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:10.540315 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:10.540333 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:10.540340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:10.540347 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:10.540359 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:10.540367 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:10.540374 | orchestrator | 2026-02-15 05:20:10.540382 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-15 05:20:10.540389 | orchestrator | Sunday 15 February 2026 05:20:02 +0000 (0:00:03.211) 
0:01:01.355 ******* 2026-02-15 05:20:10.540396 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:20:10.540403 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:20:10.540410 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:20:10.540416 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:20:10.540423 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:20:10.540430 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:20:10.540436 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:20:10.540443 | orchestrator | 2026-02-15 05:20:10.540450 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-15 05:20:10.540457 | orchestrator | Sunday 15 February 2026 05:20:05 +0000 (0:00:03.082) 0:01:04.438 ******* 2026-02-15 05:20:10.540464 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:20:10.540471 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:20:10.540478 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:20:10.540485 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:20:10.540492 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:20:10.540499 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:20:10.540506 | orchestrator | ok: [testbed-node-5] => 
(item=/ansible/roles/common/templates/erl_inetrc.j2)
2026-02-15 05:20:10.540513 | orchestrator |
2026-02-15 05:20:10.540520 | orchestrator | TASK [service-check-containers : common | Check containers] ********************
2026-02-15 05:20:10.540528 | orchestrator | Sunday 15 February 2026 05:20:08 +0000 (0:00:03.216) 0:01:07.654 *******
2026-02-15 05:20:10.540540 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:12.523103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:12.523214 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:12.523231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:12.523256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:12.523313 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:12.523327 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:12.523344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:12.523393 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:12.523406 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:12.523418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:12.523430 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:12.523444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:12.523456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:12.523472 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:12.523497 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.286273 | orchestrator |
changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.286374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.286389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.286407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.286419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.286431 | orchestrator |
2026-02-15 05:20:15.286444 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] ***
2026-02-15 05:20:15.286456 | orchestrator | Sunday 15 February 2026 05:20:12 +0000 (0:00:04.165) 0:01:11.819 *******
2026-02-15 05:20:15.286485 | orchestrator | changed: [testbed-manager] => {
2026-02-15 05:20:15.286497 | orchestrator |     "msg": "Notifying handlers"
2026-02-15 05:20:15.286519 | orchestrator | }
2026-02-15 05:20:15.286530 | orchestrator | changed: [testbed-node-0] => {
2026-02-15 05:20:15.286541 | orchestrator |     "msg": "Notifying handlers"
2026-02-15 05:20:15.286551 | orchestrator | }
2026-02-15 05:20:15.286562 | orchestrator | changed: [testbed-node-1] => {
2026-02-15 05:20:15.286572 | orchestrator |     "msg": "Notifying handlers"
2026-02-15 05:20:15.286583 | orchestrator | }
2026-02-15 05:20:15.286594 | orchestrator | changed: [testbed-node-2] => {
2026-02-15 05:20:15.286627 | orchestrator |     "msg": "Notifying handlers"
2026-02-15 05:20:15.286639 | orchestrator | }
2026-02-15 05:20:15.286649 | orchestrator | changed: [testbed-node-3] => {
2026-02-15 05:20:15.286660 | orchestrator |     "msg": "Notifying handlers"
2026-02-15 05:20:15.286671 | orchestrator | }
2026-02-15 05:20:15.286681 | orchestrator | changed: [testbed-node-4] => {
2026-02-15 05:20:15.286692 | orchestrator |     "msg": "Notifying handlers"
2026-02-15 05:20:15.286702 | orchestrator | }
2026-02-15 05:20:15.286713 | orchestrator | changed: [testbed-node-5] => {
2026-02-15 05:20:15.286723 | orchestrator |     "msg": "Notifying handlers"
2026-02-15 05:20:15.286734 | orchestrator | }
2026-02-15 05:20:15.286745 | orchestrator |
2026-02-15 05:20:15.286769 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-15 05:20:15.286781 | orchestrator | Sunday 15 February 2026 05:20:14 +0000 (0:00:02.087) 0:01:13.906 *******
2026-02-15 05:20:15.286797 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:15.286831 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.286845 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.286897 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:20:15.286912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:15.286926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.286939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.286959 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:20:15.286973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:15.286992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.287005 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:15.287018 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:20:15.287042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:24.498690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:24.498808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:24.498827 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:20:24.498841 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:24.498951 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:24.498967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:24.498978 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:20:24.498990 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:24.499011 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:24.499042 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:24.499054 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:20:24.499066 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-02-15 05:20:24.499077 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:24.499097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:20:24.499109 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:20:24.499120 | orchestrator |
2026-02-15 05:20:24.499132 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-15 05:20:24.499144 | orchestrator | Sunday 15 February 2026 05:20:17 +0000 (0:00:03.165) 0:01:17.072 *******
2026-02-15 05:20:24.499155 | orchestrator |
2026-02-15 05:20:24.499165 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-15 05:20:24.499176 | orchestrator | Sunday 15 February 2026 05:20:18 +0000 (0:00:00.461) 0:01:17.533 *******
2026-02-15 05:20:24.499187 | orchestrator |
2026-02-15 05:20:24.499198 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-15 05:20:24.499209 | orchestrator | Sunday 15 February 2026 05:20:18 +0000 (0:00:00.511) 0:01:18.045 *******
2026-02-15 05:20:24.499220 | orchestrator |
2026-02-15 05:20:24.499230 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-02-15 05:20:24.499241 | orchestrator | Sunday 15 February 2026 05:20:19 +0000 (0:00:00.457) 0:01:18.502 *******
2026-02-15 05:20:24.499251 | orchestrator |
2026-02-15 05:20:24.499267 | orchestrator | TASK
[common : Flush handlers] ************************************************* 2026-02-15 05:20:24.499277 | orchestrator | Sunday 15 February 2026 05:20:19 +0000 (0:00:00.467) 0:01:18.970 ******* 2026-02-15 05:20:24.499288 | orchestrator | 2026-02-15 05:20:24.499299 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 05:20:24.499309 | orchestrator | Sunday 15 February 2026 05:20:20 +0000 (0:00:00.725) 0:01:19.696 ******* 2026-02-15 05:20:24.499320 | orchestrator | 2026-02-15 05:20:24.499330 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 05:20:24.499341 | orchestrator | Sunday 15 February 2026 05:20:20 +0000 (0:00:00.540) 0:01:20.236 ******* 2026-02-15 05:20:24.499351 | orchestrator | 2026-02-15 05:20:24.499362 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-15 05:20:24.499372 | orchestrator | Sunday 15 February 2026 05:20:21 +0000 (0:00:00.840) 0:01:21.076 ******* 2026-02-15 05:20:24.499398 | orchestrator | fatal: [testbed-manager]: FAILED! 
=> {"changed": true, "msg": "'Traceback (most recent call last):\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 275, in _raise_for_status\\n response.raise_for_status()\\n File \"/usr/lib/python3/dist-packages/requests/models.py\", line 1021, in raise_for_status\\n raise HTTPError(http_error_msg, response=self)\\nrequests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd\\n\\nThe above exception was the direct cause of the following exception:\\n\\nTraceback (most recent call last):\\n File \"/tmp/ansible_kolla_container_payload__15t1mhp/ansible_kolla_container_payload.zip/ansible/modules/kolla_container.py\", line 421, in main\\n result = bool(getattr(cw, module.params.get(\\'action\\'))())\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/tmp/ansible_kolla_container_payload__15t1mhp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 361, in recreate_or_restart_container\\n self.pull_image()\\n File \"/tmp/ansible_kolla_container_payload__15t1mhp/ansible_kolla_container_payload.zip/ansible/module_utils/kolla_docker_worker.py\", line 202, in pull_image\\n json.loads(line.strip().decode(\\'utf-8\\')) for line in self.dc.pull(\\n ^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/api/image.py\", line 429, in pull\\n self._raise_for_status(response)\\n File \"/usr/lib/python3/dist-packages/docker/api/client.py\", line 277, in _raise_for_status\\n raise create_api_error_from_http_exception(e) from e\\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\\n File \"/usr/lib/python3/dist-packages/docker/errors.py\", line 39, in create_api_error_from_http_exception\\n raise cls(e, response=response, explanation=explanation) from e\\ndocker.errors.APIError: 500 Server Error for 
http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"} 2026-02-15 05:20:28.723066 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-02-15 05:20:28.723194 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-02-15 05:20:28.723242 | orchestrator | fatal: [testbed-node-3]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-02-15 05:20:28.723269 | orchestrator | fatal: [testbed-node-4]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-02-15 05:20:29.262807 | orchestrator | 2026-02-15 05:20:29 | INFO  | Task 2b9778f8-ae01-4004-985d-0906db530fd9 (common) was prepared for execution.
2026-02-15 05:20:29.262950 | orchestrator | 2026-02-15 05:20:29 | INFO  | It takes a moment until task 2b9778f8-ae01-4004-985d-0906db530fd9 (common) has been started and output is visible here.
2026-02-15 05:20:39.326458 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-02-15 05:20:39.326616 | orchestrator | fatal: [testbed-node-5]: FAILED! => {"changed": true, "msg": "'docker.errors.APIError: 500 Server Error for http+docker://localhost/v1.47/images/create?tag=5.0.8.20251208&fromImage=registry.osism.tech%2Fkolla%2Frelease%2Ffluentd: Internal Server Error (\"unknown: artifact kolla/release/fluentd:5.0.8.20251208 not found\")\\n'"}
2026-02-15 05:20:39.326637 | orchestrator |
2026-02-15 05:20:39.326651 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 05:20:39.326665 | orchestrator | testbed-manager : ok=18  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-15 05:20:39.326677 | orchestrator | testbed-node-0 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-15 05:20:39.326688 | orchestrator | testbed-node-1 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-15 05:20:39.326699 | orchestrator | testbed-node-2 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-15 05:20:39.326724 | orchestrator | testbed-node-3 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0
2026-02-15 05:20:39.326736 | orchestrator | testbed-node-4 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-15
05:20:39.326747 | orchestrator | testbed-node-5 : ok=14  changed=5  unreachable=0 failed=1  skipped=6  rescued=0 ignored=0 2026-02-15 05:20:39.326758 | orchestrator | 2026-02-15 05:20:39.326768 | orchestrator | 2026-02-15 05:20:39.326779 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:20:39.326790 | orchestrator | Sunday 15 February 2026 05:20:28 +0000 (0:00:06.951) 0:01:28.028 ******* 2026-02-15 05:20:39.326801 | orchestrator | =============================================================================== 2026-02-15 05:20:39.326812 | orchestrator | common : Restart fluentd container -------------------------------------- 6.95s 2026-02-15 05:20:39.326888 | orchestrator | common : Copying over config.json files for services -------------------- 5.18s 2026-02-15 05:20:39.326900 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.09s 2026-02-15 05:20:39.326910 | orchestrator | service-check-containers : common | Check containers -------------------- 4.17s 2026-02-15 05:20:39.326921 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.08s 2026-02-15 05:20:39.326932 | orchestrator | common : Flush handlers ------------------------------------------------- 4.00s 2026-02-15 05:20:39.326943 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.97s 2026-02-15 05:20:39.326954 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.74s 2026-02-15 05:20:39.326965 | orchestrator | common : include_tasks -------------------------------------------------- 3.37s 2026-02-15 05:20:39.326977 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 3.32s 2026-02-15 05:20:39.326990 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.29s 2026-02-15 05:20:39.327004 | orchestrator | common : 
Copying over kolla.target -------------------------------------- 3.27s 2026-02-15 05:20:39.327016 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.22s 2026-02-15 05:20:39.327029 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.21s 2026-02-15 05:20:39.327059 | orchestrator | common : include_tasks -------------------------------------------------- 3.17s 2026-02-15 05:20:39.327072 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.17s 2026-02-15 05:20:39.327084 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.08s 2026-02-15 05:20:39.327095 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.03s 2026-02-15 05:20:39.327106 | orchestrator | common : Find custom fluentd input config files ------------------------- 2.35s 2026-02-15 05:20:39.327117 | orchestrator | common : Ensure /var/log/journal exists on EL10 systems ----------------- 2.33s 2026-02-15 05:20:39.327129 | orchestrator | 2026-02-15 05:20:39.327140 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-02-15 05:20:39.327150 | orchestrator | 2026-02-15 05:20:39.327161 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-15 05:20:39.327172 | orchestrator | Sunday 15 February 2026 05:20:35 +0000 (0:00:02.295) 0:00:02.295 ******* 2026-02-15 05:20:39.327188 | orchestrator | included: /ansible/roles/common/tasks/upgrade.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 05:20:39.327200 | orchestrator | 2026-02-15 05:20:39.327219 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-02-15 05:20:48.134425 | orchestrator | Sunday 15 February 2026 05:20:39 +0000 (0:00:03.485) 
0:00:05.781 ******* 2026-02-15 05:20:48.134533 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-15 05:20:48.134574 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-15 05:20:48.134587 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-15 05:20:48.134601 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-15 05:20:48.134620 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-15 05:20:48.134637 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-15 05:20:48.134655 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 05:20:48.134674 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 05:20:48.134691 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 05:20:48.134708 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-02-15 05:20:48.134727 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 05:20:48.134744 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 05:20:48.134763 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 05:20:48.134778 | orchestrator | ok: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 05:20:48.134797 | orchestrator | ok: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 05:20:48.134816 | orchestrator | ok: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 05:20:48.134879 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-02-15 
05:20:48.134897 | orchestrator | ok: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 05:20:48.134913 | orchestrator | ok: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 05:20:48.134931 | orchestrator | ok: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 05:20:48.134949 | orchestrator | ok: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-02-15 05:20:48.134969 | orchestrator | 2026-02-15 05:20:48.134990 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-02-15 05:20:48.135010 | orchestrator | Sunday 15 February 2026 05:20:42 +0000 (0:00:03.355) 0:00:09.137 ******* 2026-02-15 05:20:48.135030 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 05:20:48.135052 | orchestrator | 2026-02-15 05:20:48.135071 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-02-15 05:20:48.135089 | orchestrator | Sunday 15 February 2026 05:20:45 +0000 (0:00:02.941) 0:00:12.078 ******* 2026-02-15 05:20:48.135113 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:48.135160 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:48.135213 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:48.135276 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:48.135299 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:48.135319 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:48.135340 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:20:48.135359 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:48.135379 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:48.135410 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:48.135450 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:50.884242 | 
orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:50.884376 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:50.884404 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:50.884425 | 
orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:50.884447 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:50.884499 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:50.884544 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:50.884563 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:50.884605 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:50.884626 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:20:50.884647 | orchestrator | 2026-02-15 05:20:50.884663 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-02-15 05:20:50.884675 | orchestrator | Sunday 15 February 2026 05:20:49 +0000 (0:00:04.371) 0:00:16.449 ******* 2026-02-15 05:20:50.884688 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:50.884700 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:50.884715 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:50.884736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:50.884757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:50.884779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.842454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.842587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.842614 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.842636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:52.842689 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:20:52.842710 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:20:52.842728 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:20:52.842747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 
'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.842767 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:52.842937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.842962 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:20:52.843007 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:52.843029 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.843050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.843083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.843104 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:20:52.843125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.843145 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:20:52.843165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:52.843252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.843275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:52.843297 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:20:52.843317 | orchestrator | 2026-02-15 05:20:52.843336 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-02-15 05:20:52.843365 | orchestrator | Sunday 15 February 2026 05:20:52 +0000 (0:00:02.837) 0:00:19.288 ******* 2026-02-15 05:20:55.955217 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:55.955372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:55.955416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:55.955430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:55.956285 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:55.956305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:55.956320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:55.956336 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:20:55.956370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:55.956382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:55.956405 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:55.956417 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:20:55.956427 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:20:55.956438 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:55.956450 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:55.956476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:20:55.956488 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:20:55.956500 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:20:55.956521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:08.174382 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:21:08.174519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:08.174535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:08.174549 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 
05:21:08.174564 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:21:08.174578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:21:08.174605 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:21:08.174616 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:21:08.174627 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:21:08.174639 | orchestrator |
2026-02-15 05:21:08.174650 | orchestrator | TASK [common : Ensure /var/log/journal exists on EL10 systems] *****************
2026-02-15 05:21:08.174662 | orchestrator | Sunday 15 February 2026 05:20:55 +0000 (0:00:03.119) 0:00:22.407 *******
2026-02-15 05:21:08.174673 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:21:08.174684 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:21:08.174694 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:21:08.174705 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:21:08.174716 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:21:08.174726 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:21:08.174737 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:21:08.174759 | orchestrator |
2026-02-15 05:21:08.174770 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2026-02-15 05:21:08.174781 | orchestrator | Sunday 15 February 2026 05:20:58 +0000 (0:00:02.118) 0:00:24.525 *******
2026-02-15 05:21:08.174792 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:21:08.174803 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:21:08.174813 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:21:08.174851 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:21:08.174862 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:21:08.174873 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:21:08.174900 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:21:08.174912 | orchestrator |
2026-02-15 05:21:08.174925 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2026-02-15 05:21:08.174939 | orchestrator | Sunday 15 February 2026 05:21:00 +0000 (0:00:02.297) 0:00:26.525 *******
2026-02-15 05:21:08.174951 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:21:08.174964 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:21:08.174976 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:21:08.174988 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:21:08.175001 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:21:08.175014 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:21:08.175027 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:21:08.175039 | orchestrator |
2026-02-15 05:21:08.175051 | orchestrator | TASK [common : Copying over kolla.target] **************************************
2026-02-15 05:21:08.175064 | orchestrator | Sunday 15 February 2026 05:21:02 +0000 (0:00:03.027) 0:00:28.823 *******
2026-02-15 05:21:08.175077 | orchestrator | ok: [testbed-manager]
2026-02-15 05:21:08.175090 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:21:08.175102 | orchestrator | ok: [testbed-node-1]
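Each `item` printed by the per-host loops in this log is one entry of a service map: a service name (`item.key`) mapped to a container definition (`item.value`) with `container_name`, `image`, `environment`, `volumes`, and an `enabled` flag. A minimal Python sketch of how such a map is iterated; the dict contents are abridged from the log output above, the `enabled_services` helper is hypothetical (not part of kolla-ansible), and `fluentd` is marked disabled purely to demonstrate the filtering:

```python
# Abridged service map, shaped like the item dicts printed in the log.
common_services = {
    "cron": {
        "container_name": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208",
        "volumes": [
            "/etc/kolla/cron/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
        ],
    },
    "fluentd": {
        "container_name": "fluentd",
        "enabled": False,  # illustrative only; enabled in the actual log
        "image": "registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208",
        "volumes": ["kolla_logs:/var/log/kolla/"],
    },
}


def enabled_services(services):
    """Yield (name, definition) pairs for enabled services only.

    Hypothetical helper mirroring how the loop tasks above act once
    per service entry and skip disabled ones.
    """
    for name, svc in services.items():
        if svc.get("enabled"):
            yield name, svc


for name, svc in enabled_services(common_services):
    print(name, svc["image"])
```

A task such as "Copying over config.json files for services" then runs once per yielded entry on every host, which is why the same `cron`/`fluentd`/`kolla-toolbox` dicts repeat for each `testbed-node-*` in the log.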
2026-02-15 05:21:08.175121 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:21:08.175141 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:21:08.175159 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:21:08.175180 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:21:08.175202 | orchestrator | 2026-02-15 05:21:08.175221 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-02-15 05:21:08.175237 | orchestrator | Sunday 15 February 2026 05:21:05 +0000 (0:00:03.027) 0:00:31.851 ******* 2026-02-15 05:21:08.175251 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:08.175267 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:08.175281 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:08.175310 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:08.175324 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:08.175347 | orchestrator | ok: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028524 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:11.028633 | orchestrator | ok: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028650 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:11.028662 | orchestrator | ok: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028712 | orchestrator | ok: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028724 | orchestrator | ok: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028736 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 
'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028768 | orchestrator | ok: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028781 | orchestrator | ok: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028793 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028804 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028903 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028921 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028933 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028944 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:11.028956 | orchestrator | 2026-02-15 05:21:11.028968 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-02-15 05:21:11.028980 | orchestrator | Sunday 15 February 2026 05:21:10 +0000 (0:00:04.665) 0:00:36.516 ******* 2026-02-15 05:21:11.028991 | orchestrator | [WARNING]: Skipped 2026-02-15 05:21:11.029010 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-02-15 05:21:29.804503 | orchestrator | to this access issue: 2026-02-15 05:21:29.804621 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-02-15 05:21:29.804641 | orchestrator | directory 2026-02-15 05:21:29.804654 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 05:21:29.804666 | orchestrator | 2026-02-15 05:21:29.804678 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-02-15 05:21:29.804690 | orchestrator | Sunday 15 February 2026 05:21:12 +0000 (0:00:02.398) 0:00:38.915 ******* 2026-02-15 05:21:29.804701 | orchestrator | [WARNING]: Skipped 2026-02-15 05:21:29.804711 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-02-15 05:21:29.804722 | orchestrator | to this access issue: 2026-02-15 05:21:29.804733 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' 
is not a 2026-02-15 05:21:29.804744 | orchestrator | directory 2026-02-15 05:21:29.804755 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 05:21:29.804765 | orchestrator | 2026-02-15 05:21:29.804776 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-02-15 05:21:29.804787 | orchestrator | Sunday 15 February 2026 05:21:14 +0000 (0:00:01.902) 0:00:40.818 ******* 2026-02-15 05:21:29.804797 | orchestrator | [WARNING]: Skipped 2026-02-15 05:21:29.804857 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-02-15 05:21:29.804869 | orchestrator | to this access issue: 2026-02-15 05:21:29.804880 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-02-15 05:21:29.804891 | orchestrator | directory 2026-02-15 05:21:29.804901 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 05:21:29.804938 | orchestrator | 2026-02-15 05:21:29.804950 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-02-15 05:21:29.804960 | orchestrator | Sunday 15 February 2026 05:21:16 +0000 (0:00:01.889) 0:00:42.708 ******* 2026-02-15 05:21:29.804971 | orchestrator | [WARNING]: Skipped 2026-02-15 05:21:29.804982 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-02-15 05:21:29.804993 | orchestrator | to this access issue: 2026-02-15 05:21:29.805004 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-02-15 05:21:29.805014 | orchestrator | directory 2026-02-15 05:21:29.805025 | orchestrator | ok: [testbed-manager -> localhost] 2026-02-15 05:21:29.805036 | orchestrator | 2026-02-15 05:21:29.805049 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-02-15 05:21:29.805061 | orchestrator | Sunday 15 February 2026 05:21:18 +0000 (0:00:01.839) 
0:00:44.547 ******* 2026-02-15 05:21:29.805074 | orchestrator | ok: [testbed-manager] 2026-02-15 05:21:29.805087 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:21:29.805099 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:21:29.805111 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:21:29.805123 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:21:29.805135 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:21:29.805148 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:21:29.805160 | orchestrator | 2026-02-15 05:21:29.805172 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-02-15 05:21:29.805184 | orchestrator | Sunday 15 February 2026 05:21:21 +0000 (0:00:03.837) 0:00:48.384 ******* 2026-02-15 05:21:29.805197 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:21:29.805211 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:21:29.805223 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:21:29.805235 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:21:29.805247 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:21:29.805259 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:21:29.805288 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-02-15 05:21:29.805301 | orchestrator | 2026-02-15 05:21:29.805313 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-02-15 05:21:29.805326 | orchestrator | Sunday 15 February 2026 05:21:25 +0000 (0:00:03.214) 0:00:51.599 
******* 2026-02-15 05:21:29.805338 | orchestrator | ok: [testbed-manager] 2026-02-15 05:21:29.805350 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:21:29.805362 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:21:29.805374 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:21:29.805386 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:21:29.805398 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:21:29.805408 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:21:29.805419 | orchestrator | 2026-02-15 05:21:29.805430 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-02-15 05:21:29.805440 | orchestrator | Sunday 15 February 2026 05:21:28 +0000 (0:00:02.872) 0:00:54.471 ******* 2026-02-15 05:21:29.805454 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:29.805496 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  
2026-02-15 05:21:29.805510 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:29.805521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:29.805550 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:29.805580 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:29.805593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:29.805604 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:29.805639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:39.545309 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:39.545425 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:39.545444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:39.545457 | orchestrator | ok: [testbed-node-1] => 
(item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:39.545487 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:39.545499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:39.545532 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:39.545561 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:39.545574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:39.545585 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:39.545596 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:39.545607 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:39.545619 | orchestrator | 2026-02-15 05:21:39.545631 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-02-15 05:21:39.545643 | orchestrator | Sunday 15 February 2026 05:21:30 +0000 (0:00:02.887) 0:00:57.359 ******* 2026-02-15 05:21:39.545660 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:21:39.545688 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:21:39.545710 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:21:39.545721 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:21:39.545731 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:21:39.545742 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:21:39.545761 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-02-15 05:21:39.545772 | orchestrator | 
2026-02-15 05:21:39.545783 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-02-15 05:21:39.545816 | orchestrator | Sunday 15 February 2026 05:21:33 +0000 (0:00:03.067) 0:01:00.427 ******* 2026-02-15 05:21:39.545828 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:21:39.545839 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:21:39.545850 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:21:39.545860 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:21:39.545871 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:21:39.545882 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:21:39.545892 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-02-15 05:21:39.545903 | orchestrator | 2026-02-15 05:21:39.545914 | orchestrator | TASK [service-check-containers : common | Check containers] ******************** 2026-02-15 05:21:39.545924 | orchestrator | Sunday 15 February 2026 05:21:37 +0000 (0:00:03.225) 0:01:03.653 ******* 2026-02-15 05:21:39.545945 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:41.731600 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:41.731703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:41.731719 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:41.731746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:41.731781 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:41.731852 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-02-15 05:21:41.731869 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:41.731900 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:41.731912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:41.731924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:41.731941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:41.731961 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:41.731973 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:41.731987 | orchestrator | changed: [testbed-node-5] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:41.732007 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:44.669993 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:44.670153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2026-02-15 05:21:44.670170 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:44.670223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:44.670236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:21:44.670248 | orchestrator | 2026-02-15 05:21:44.670261 | orchestrator | TASK [service-check-containers : common | Notify handlers to restart containers] *** 2026-02-15 05:21:44.670274 | orchestrator | Sunday 15 February 2026 05:21:41 +0000 (0:00:04.533) 0:01:08.186 ******* 2026-02-15 05:21:44.670285 | orchestrator | changed: [testbed-manager] => { 2026-02-15 05:21:44.670297 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:21:44.670308 | orchestrator | } 2026-02-15 05:21:44.670319 | orchestrator | changed: [testbed-node-0] => { 
2026-02-15 05:21:44.670330 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:21:44.670340 | orchestrator | } 2026-02-15 05:21:44.670351 | orchestrator | changed: [testbed-node-1] => { 2026-02-15 05:21:44.670362 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:21:44.670373 | orchestrator | } 2026-02-15 05:21:44.670383 | orchestrator | changed: [testbed-node-2] => { 2026-02-15 05:21:44.670394 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:21:44.670404 | orchestrator | } 2026-02-15 05:21:44.670414 | orchestrator | changed: [testbed-node-3] => { 2026-02-15 05:21:44.670425 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:21:44.670435 | orchestrator | } 2026-02-15 05:21:44.670446 | orchestrator | changed: [testbed-node-4] => { 2026-02-15 05:21:44.670456 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:21:44.670467 | orchestrator | } 2026-02-15 05:21:44.670477 | orchestrator | changed: [testbed-node-5] => { 2026-02-15 05:21:44.670488 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:21:44.670498 | orchestrator | } 2026-02-15 05:21:44.670509 | orchestrator | 2026-02-15 05:21:44.670522 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-15 05:21:44.670535 | orchestrator | Sunday 15 February 2026 05:21:43 +0000 (0:00:02.151) 0:01:10.337 ******* 2026-02-15 05:21:44.670550 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:21:44.670591 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:44.670614 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:44.670626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:21:44.670638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:44.670649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:44.670660 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:21:44.670672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:21:44.670683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:44.670694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:21:44.670705 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:21:44.670729 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:22:30.558846 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:22:30.558987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:22:30.559038 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:22:30.559054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:22:30.559069 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:22:30.559081 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:22:30.559094 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:22:30.559107 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:22:30.559118 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:22:30.559218 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:22:30.559236 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:22:30.559248 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:22:30.559260 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/fluentd:5.0.8.20251208', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-02-15 05:22:30.559277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/kolla-toolbox:20.3.1.20251208', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:22:30.559290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cron:3.0.20251208', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:22:30.559309 | 
orchestrator | skipping: [testbed-node-5] 2026-02-15 05:22:30.559329 | orchestrator | 2026-02-15 05:22:30.559349 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 05:22:30.559372 | orchestrator | Sunday 15 February 2026 05:21:46 +0000 (0:00:02.973) 0:01:13.311 ******* 2026-02-15 05:22:30.559392 | orchestrator | 2026-02-15 05:22:30.559413 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 05:22:30.559432 | orchestrator | Sunday 15 February 2026 05:21:47 +0000 (0:00:00.462) 0:01:13.773 ******* 2026-02-15 05:22:30.559452 | orchestrator | 2026-02-15 05:22:30.559468 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 05:22:30.559480 | orchestrator | Sunday 15 February 2026 05:21:47 +0000 (0:00:00.451) 0:01:14.225 ******* 2026-02-15 05:22:30.559494 | orchestrator | 2026-02-15 05:22:30.559506 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 05:22:30.559519 | orchestrator | Sunday 15 February 2026 05:21:48 +0000 (0:00:00.450) 0:01:14.675 ******* 2026-02-15 05:22:30.559542 | orchestrator | 2026-02-15 05:22:30.559562 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 05:22:30.559580 | orchestrator | Sunday 15 February 2026 05:21:48 +0000 (0:00:00.452) 0:01:15.127 ******* 2026-02-15 05:22:30.559598 | orchestrator | 2026-02-15 05:22:30.559618 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 05:22:30.559637 | orchestrator | Sunday 15 February 2026 05:21:49 +0000 (0:00:00.714) 0:01:15.842 ******* 2026-02-15 05:22:30.559655 | orchestrator | 2026-02-15 05:22:30.559673 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-02-15 05:22:30.559692 | orchestrator | Sunday 15 February 2026 
05:21:49 +0000 (0:00:00.440) 0:01:16.282 ******* 2026-02-15 05:22:30.559711 | orchestrator | 2026-02-15 05:22:30.559729 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-02-15 05:22:30.559744 | orchestrator | Sunday 15 February 2026 05:21:50 +0000 (0:00:00.817) 0:01:17.100 ******* 2026-02-15 05:22:30.559755 | orchestrator | changed: [testbed-manager] 2026-02-15 05:22:30.559824 | orchestrator | changed: [testbed-node-4] 2026-02-15 05:22:30.559835 | orchestrator | changed: [testbed-node-5] 2026-02-15 05:22:30.559846 | orchestrator | changed: [testbed-node-3] 2026-02-15 05:22:30.559857 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:22:30.559868 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:22:30.559890 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:23:22.547715 | orchestrator | 2026-02-15 05:23:22.547894 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-02-15 05:23:22.547911 | orchestrator | Sunday 15 February 2026 05:22:30 +0000 (0:00:39.911) 0:01:57.011 ******* 2026-02-15 05:23:22.547922 | orchestrator | changed: [testbed-node-4] 2026-02-15 05:23:22.547933 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:23:22.547943 | orchestrator | changed: [testbed-node-3] 2026-02-15 05:23:22.547953 | orchestrator | changed: [testbed-manager] 2026-02-15 05:23:22.547962 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:23:22.547973 | orchestrator | changed: [testbed-node-5] 2026-02-15 05:23:22.547983 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:23:22.547992 | orchestrator | 2026-02-15 05:23:22.548002 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-02-15 05:23:22.548012 | orchestrator | Sunday 15 February 2026 05:23:06 +0000 (0:00:35.794) 0:02:32.806 ******* 2026-02-15 05:23:22.548022 | orchestrator | ok: [testbed-manager] 2026-02-15 05:23:22.548033 | orchestrator | 
ok: [testbed-node-0] 2026-02-15 05:23:22.548042 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:23:22.548052 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:23:22.548061 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:23:22.548070 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:23:22.548080 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:23:22.548089 | orchestrator | 2026-02-15 05:23:22.548099 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-02-15 05:23:22.548109 | orchestrator | Sunday 15 February 2026 05:23:09 +0000 (0:00:03.046) 0:02:35.853 ******* 2026-02-15 05:23:22.548118 | orchestrator | changed: [testbed-manager] 2026-02-15 05:23:22.548128 | orchestrator | changed: [testbed-node-3] 2026-02-15 05:23:22.548137 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:23:22.548147 | orchestrator | changed: [testbed-node-4] 2026-02-15 05:23:22.548156 | orchestrator | changed: [testbed-node-5] 2026-02-15 05:23:22.548166 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:23:22.548175 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:23:22.548184 | orchestrator | 2026-02-15 05:23:22.548194 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:23:22.548205 | orchestrator | testbed-manager : ok=22  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 05:23:22.548232 | orchestrator | testbed-node-0 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 05:23:22.548264 | orchestrator | testbed-node-1 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 05:23:22.548276 | orchestrator | testbed-node-2 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 05:23:22.548287 | orchestrator | testbed-node-3 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 05:23:22.548299 | orchestrator 
| testbed-node-4 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 05:23:22.548309 | orchestrator | testbed-node-5 : ok=18  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 05:23:22.548326 | orchestrator | 2026-02-15 05:23:22.548344 | orchestrator | 2026-02-15 05:23:22.548369 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:23:22.548390 | orchestrator | Sunday 15 February 2026 05:23:21 +0000 (0:00:12.600) 0:02:48.453 ******* 2026-02-15 05:23:22.548407 | orchestrator | =============================================================================== 2026-02-15 05:23:22.548424 | orchestrator | common : Restart fluentd container ------------------------------------- 39.91s 2026-02-15 05:23:22.548441 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 35.79s 2026-02-15 05:23:22.548459 | orchestrator | common : Restart cron container ---------------------------------------- 12.60s 2026-02-15 05:23:22.548477 | orchestrator | common : Copying over config.json files for services -------------------- 4.67s 2026-02-15 05:23:22.548494 | orchestrator | service-check-containers : common | Check containers -------------------- 4.53s 2026-02-15 05:23:22.548514 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.37s 2026-02-15 05:23:22.548531 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 3.84s 2026-02-15 05:23:22.548549 | orchestrator | common : Flush handlers ------------------------------------------------- 3.79s 2026-02-15 05:23:22.548568 | orchestrator | common : include_tasks -------------------------------------------------- 3.49s 2026-02-15 05:23:22.548586 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.36s 2026-02-15 05:23:22.548604 | orchestrator | common : Copy rabbitmq erl_inetrc to 
kolla toolbox ---------------------- 3.23s 2026-02-15 05:23:22.548622 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.21s 2026-02-15 05:23:22.548639 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.12s 2026-02-15 05:23:22.548655 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.07s 2026-02-15 05:23:22.548672 | orchestrator | common : Initializing toolbox container using normal user --------------- 3.05s 2026-02-15 05:23:22.548691 | orchestrator | common : Copying over kolla.target -------------------------------------- 3.03s 2026-02-15 05:23:22.548708 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.97s 2026-02-15 05:23:22.548772 | orchestrator | common : include_tasks -------------------------------------------------- 2.94s 2026-02-15 05:23:22.548789 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.89s 2026-02-15 05:23:22.548803 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.87s 2026-02-15 05:23:22.894282 | orchestrator | + osism apply -a upgrade loadbalancer 2026-02-15 05:23:25.134533 | orchestrator | 2026-02-15 05:23:25 | INFO  | Task 63a73328-7b13-423b-8d24-8862e7e44b55 (loadbalancer) was prepared for execution. 2026-02-15 05:23:25.134632 | orchestrator | 2026-02-15 05:23:25 | INFO  | It takes a moment until task 63a73328-7b13-423b-8d24-8862e7e44b55 (loadbalancer) has been started and output is visible here. 
2026-02-15 05:24:00.259810 | orchestrator | 2026-02-15 05:24:00.259930 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 05:24:00.259971 | orchestrator | 2026-02-15 05:24:00.259984 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 05:24:00.259995 | orchestrator | Sunday 15 February 2026 05:23:31 +0000 (0:00:01.441) 0:00:01.441 ******* 2026-02-15 05:24:00.260006 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:24:00.260018 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:24:00.260029 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:24:00.260039 | orchestrator | 2026-02-15 05:24:00.260050 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 05:24:00.260061 | orchestrator | Sunday 15 February 2026 05:23:33 +0000 (0:00:02.019) 0:00:03.461 ******* 2026-02-15 05:24:00.260072 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-02-15 05:24:00.260083 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-02-15 05:24:00.260094 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-02-15 05:24:00.260104 | orchestrator | 2026-02-15 05:24:00.260115 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-02-15 05:24:00.260126 | orchestrator | 2026-02-15 05:24:00.260136 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-15 05:24:00.260147 | orchestrator | Sunday 15 February 2026 05:23:36 +0000 (0:00:03.154) 0:00:06.615 ******* 2026-02-15 05:24:00.260173 | orchestrator | included: /ansible/roles/loadbalancer/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:24:00.260185 | orchestrator | 2026-02-15 05:24:00.260196 | orchestrator | TASK [loadbalancer : Stop and remove containers for haproxy exporter 
containers] *** 2026-02-15 05:24:00.260206 | orchestrator | Sunday 15 February 2026 05:23:38 +0000 (0:00:01.914) 0:00:08.530 ******* 2026-02-15 05:24:00.260217 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:24:00.260228 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:24:00.260239 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:24:00.260250 | orchestrator | 2026-02-15 05:24:00.260260 | orchestrator | TASK [loadbalancer : Removing config for haproxy exporter] ********************* 2026-02-15 05:24:00.260271 | orchestrator | Sunday 15 February 2026 05:23:40 +0000 (0:00:02.051) 0:00:10.582 ******* 2026-02-15 05:24:00.260282 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:24:00.260293 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:24:00.260303 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:24:00.260314 | orchestrator | 2026-02-15 05:24:00.260324 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-02-15 05:24:00.260335 | orchestrator | Sunday 15 February 2026 05:23:42 +0000 (0:00:02.066) 0:00:12.648 ******* 2026-02-15 05:24:00.260346 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:24:00.260356 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:24:00.260367 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:24:00.260377 | orchestrator | 2026-02-15 05:24:00.260388 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-02-15 05:24:00.260399 | orchestrator | Sunday 15 February 2026 05:23:44 +0000 (0:00:01.674) 0:00:14.323 ******* 2026-02-15 05:24:00.260410 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:24:00.260420 | orchestrator | 2026-02-15 05:24:00.260431 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-02-15 05:24:00.260442 | orchestrator | Sunday 15 February 2026 05:23:46 +0000 (0:00:02.044) 0:00:16.368 ******* 2026-02-15 
05:24:00.260452 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:24:00.260463 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:24:00.260473 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:24:00.260484 | orchestrator | 2026-02-15 05:24:00.260494 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-02-15 05:24:00.260505 | orchestrator | Sunday 15 February 2026 05:23:48 +0000 (0:00:01.731) 0:00:18.099 ******* 2026-02-15 05:24:00.260516 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-15 05:24:00.260526 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-15 05:24:00.260545 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-02-15 05:24:00.260556 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-15 05:24:00.260567 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-15 05:24:00.260577 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-02-15 05:24:00.260589 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-15 05:24:00.260600 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-15 05:24:00.260611 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-02-15 05:24:00.260622 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-15 05:24:00.260633 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-02-15 05:24:00.260643 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 
2026-02-15 05:24:00.260654 | orchestrator | 2026-02-15 05:24:00.260665 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-15 05:24:00.260675 | orchestrator | Sunday 15 February 2026 05:23:51 +0000 (0:00:03.126) 0:00:21.226 ******* 2026-02-15 05:24:00.260686 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-15 05:24:00.260697 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-15 05:24:00.260728 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-15 05:24:00.260740 | orchestrator | 2026-02-15 05:24:00.260751 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-15 05:24:00.260779 | orchestrator | Sunday 15 February 2026 05:23:53 +0000 (0:00:01.998) 0:00:23.225 ******* 2026-02-15 05:24:00.260791 | orchestrator | ok: [testbed-node-1] => (item=ip_vs) 2026-02-15 05:24:00.260802 | orchestrator | ok: [testbed-node-0] => (item=ip_vs) 2026-02-15 05:24:00.260813 | orchestrator | ok: [testbed-node-2] => (item=ip_vs) 2026-02-15 05:24:00.260823 | orchestrator | 2026-02-15 05:24:00.260834 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-15 05:24:00.260845 | orchestrator | Sunday 15 February 2026 05:23:55 +0000 (0:00:02.225) 0:00:25.450 ******* 2026-02-15 05:24:00.260856 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-02-15 05:24:00.260867 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:24:00.260878 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-02-15 05:24:00.260889 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:24:00.260899 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-02-15 05:24:00.260910 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:24:00.260920 | orchestrator | 2026-02-15 05:24:00.260931 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 
2026-02-15 05:24:00.260942 | orchestrator | Sunday 15 February 2026 05:23:57 +0000 (0:00:01.931) 0:00:27.382 ******* 2026-02-15 05:24:00.260961 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:00.260979 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:00.260998 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:00.261010 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:00.261021 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:00.261040 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:11.380191 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:24:11.380311 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:24:11.380347 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:24:11.380360 | orchestrator | 2026-02-15 05:24:11.380373 | 
orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-02-15 05:24:11.380385 | orchestrator | Sunday 15 February 2026 05:24:00 +0000 (0:00:02.743) 0:00:30.126 ******* 2026-02-15 05:24:11.380396 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:24:11.380408 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:24:11.380418 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:24:11.380429 | orchestrator | 2026-02-15 05:24:11.380440 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-02-15 05:24:11.380451 | orchestrator | Sunday 15 February 2026 05:24:02 +0000 (0:00:01.969) 0:00:32.095 ******* 2026-02-15 05:24:11.380462 | orchestrator | ok: [testbed-node-0] => (item=users) 2026-02-15 05:24:11.380474 | orchestrator | ok: [testbed-node-1] => (item=users) 2026-02-15 05:24:11.380484 | orchestrator | ok: [testbed-node-2] => (item=users) 2026-02-15 05:24:11.380495 | orchestrator | ok: [testbed-node-0] => (item=rules) 2026-02-15 05:24:11.380506 | orchestrator | ok: [testbed-node-1] => (item=rules) 2026-02-15 05:24:11.380517 | orchestrator | ok: [testbed-node-2] => (item=rules) 2026-02-15 05:24:11.380528 | orchestrator | 2026-02-15 05:24:11.380539 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-02-15 05:24:11.380549 | orchestrator | Sunday 15 February 2026 05:24:05 +0000 (0:00:02.862) 0:00:34.958 ******* 2026-02-15 05:24:11.380560 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:24:11.380571 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:24:11.380581 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:24:11.380592 | orchestrator | 2026-02-15 05:24:11.380603 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-02-15 05:24:11.380613 | orchestrator | Sunday 15 February 2026 05:24:07 +0000 (0:00:02.240) 0:00:37.198 ******* 2026-02-15 05:24:11.380624 | orchestrator | ok: 
[testbed-node-0] 2026-02-15 05:24:11.380635 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:24:11.380645 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:24:11.380656 | orchestrator | 2026-02-15 05:24:11.380666 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-02-15 05:24:11.380677 | orchestrator | Sunday 15 February 2026 05:24:09 +0000 (0:00:02.254) 0:00:39.453 ******* 2026-02-15 05:24:11.380689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 05:24:11.380748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:24:11.380778 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:24:11.380794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 05:24:11.380806 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:24:11.380818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 05:24:11.380830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:24:11.380841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:24:11.380852 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 05:24:11.380869 | orchestrator | skipping: [testbed-node-1] 2026-02-15 
05:24:11.380893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 05:24:15.724238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:24:15.724343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:24:15.724359 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 05:24:15.724372 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:24:15.724386 | orchestrator | 2026-02-15 05:24:15.724398 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-02-15 05:24:15.724411 | orchestrator | Sunday 15 February 2026 05:24:11 +0000 (0:00:01.792) 0:00:41.246 ******* 2026-02-15 05:24:15.724422 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:15.724434 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': 
True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:15.724473 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:15.724503 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:15.724535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:24:15.724547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 05:24:15.724559 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:15.724587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:24:15.724607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 05:24:15.724631 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:30.592980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:24:30.593063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy-ssh:9.6.20251208', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4', '__omit_place_holder__14487dab6e8f6322a2fcd5247465596697d5fee4'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-02-15 05:24:30.593069 | orchestrator | 2026-02-15 05:24:30.593074 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-02-15 05:24:30.593079 | orchestrator | Sunday 15 February 2026 05:24:15 +0000 (0:00:04.345) 0:00:45.591 ******* 2026-02-15 05:24:30.593084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:30.593089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:30.593104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:30.593112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:30.593126 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:30.593130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:30.593134 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:24:30.593138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:24:30.593145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:24:30.593149 | orchestrator | 2026-02-15 05:24:30.593153 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-02-15 05:24:30.593157 | orchestrator | Sunday 15 February 2026 05:24:21 +0000 (0:00:05.721) 0:00:51.312 ******* 2026-02-15 05:24:30.593161 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-15 05:24:30.593166 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-15 05:24:30.593170 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-02-15 
05:24:30.593174 | orchestrator | 2026-02-15 05:24:30.593177 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-02-15 05:24:30.593181 | orchestrator | Sunday 15 February 2026 05:24:24 +0000 (0:00:02.894) 0:00:54.207 ******* 2026-02-15 05:24:30.593186 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-15 05:24:30.593190 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-15 05:24:30.593194 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-02-15 05:24:30.593198 | orchestrator | 2026-02-15 05:24:30.593201 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-02-15 05:24:30.593205 | orchestrator | Sunday 15 February 2026 05:24:28 +0000 (0:00:04.337) 0:00:58.545 ******* 2026-02-15 05:24:30.593209 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:24:30.593214 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:24:30.593220 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:24:51.918347 | orchestrator | 2026-02-15 05:24:51.918432 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-02-15 05:24:51.918440 | orchestrator | Sunday 15 February 2026 05:24:30 +0000 (0:00:01.913) 0:01:00.458 ******* 2026-02-15 05:24:51.918445 | orchestrator | ok: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-15 05:24:51.918450 | orchestrator | ok: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-15 05:24:51.918454 | orchestrator | ok: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-02-15 05:24:51.918458 | 
orchestrator | 2026-02-15 05:24:51.918463 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-02-15 05:24:51.918467 | orchestrator | Sunday 15 February 2026 05:24:33 +0000 (0:00:03.022) 0:01:03.482 ******* 2026-02-15 05:24:51.918471 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-15 05:24:51.918476 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-15 05:24:51.918480 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-02-15 05:24:51.918502 | orchestrator | 2026-02-15 05:24:51.918506 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-02-15 05:24:51.918510 | orchestrator | Sunday 15 February 2026 05:24:36 +0000 (0:00:02.766) 0:01:06.249 ******* 2026-02-15 05:24:51.918515 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:24:51.918519 | orchestrator | 2026-02-15 05:24:51.918522 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-02-15 05:24:51.918526 | orchestrator | Sunday 15 February 2026 05:24:38 +0000 (0:00:01.914) 0:01:08.163 ******* 2026-02-15 05:24:51.918531 | orchestrator | ok: [testbed-node-0] => (item=haproxy.pem) 2026-02-15 05:24:51.918536 | orchestrator | ok: [testbed-node-1] => (item=haproxy.pem) 2026-02-15 05:24:51.918539 | orchestrator | ok: [testbed-node-2] => (item=haproxy.pem) 2026-02-15 05:24:51.918543 | orchestrator | 2026-02-15 05:24:51.918547 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-02-15 05:24:51.918551 | orchestrator | Sunday 15 February 2026 05:24:41 +0000 (0:00:02.727) 0:01:10.890 ******* 2026-02-15 05:24:51.918555 | 
orchestrator | ok: [testbed-node-0] => (item=haproxy-internal.pem) 2026-02-15 05:24:51.918559 | orchestrator | ok: [testbed-node-1] => (item=haproxy-internal.pem) 2026-02-15 05:24:51.918563 | orchestrator | ok: [testbed-node-2] => (item=haproxy-internal.pem) 2026-02-15 05:24:51.918567 | orchestrator | 2026-02-15 05:24:51.918571 | orchestrator | TASK [loadbalancer : Copying over proxysql-cert.pem] *************************** 2026-02-15 05:24:51.918575 | orchestrator | Sunday 15 February 2026 05:24:43 +0000 (0:00:02.643) 0:01:13.534 ******* 2026-02-15 05:24:51.918579 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:24:51.918583 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:24:51.918587 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:24:51.918591 | orchestrator | 2026-02-15 05:24:51.918595 | orchestrator | TASK [loadbalancer : Copying over proxysql-key.pem] **************************** 2026-02-15 05:24:51.918598 | orchestrator | Sunday 15 February 2026 05:24:44 +0000 (0:00:01.345) 0:01:14.879 ******* 2026-02-15 05:24:51.918602 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:24:51.918606 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:24:51.918611 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:24:51.918615 | orchestrator | 2026-02-15 05:24:51.918618 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-02-15 05:24:51.918622 | orchestrator | Sunday 15 February 2026 05:24:47 +0000 (0:00:02.038) 0:01:16.917 ******* 2026-02-15 05:24:51.918628 | orchestrator | ok: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:51.918644 | orchestrator | ok: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:51.918658 | orchestrator | ok: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-15 05:24:51.918666 | orchestrator | ok: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:51.918670 | orchestrator | ok: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:51.918675 | orchestrator | ok: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:24:51.918710 | orchestrator | ok: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:24:51.918718 | orchestrator | ok: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:24:51.918726 | orchestrator | ok: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:24:55.861417 | orchestrator | 2026-02-15 05:24:55.861524 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2026-02-15 05:24:55.861542 | orchestrator | Sunday 15 February 2026 05:24:51 +0000 (0:00:04.873) 0:01:21.791 ******* 2026-02-15 05:24:55.861557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 05:24:55.861573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:24:55.861585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:24:55.861597 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:24:55.861610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 05:24:55.861621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:24:55.861649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:24:55.861733 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:24:55.861765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 05:24:55.861778 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:24:55.861789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:24:55.861800 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:24:55.861811 | orchestrator | 2026-02-15 05:24:55.861822 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-02-15 05:24:55.861833 | orchestrator | Sunday 15 February 2026 
05:24:53 +0000 (0:00:01.670) 0:01:23.462 ******* 2026-02-15 05:24:55.861845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 05:24:55.861856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:24:55.861873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-02-15 05:24:55.861893 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:24:55.861913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 05:25:07.421336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:25:07.421454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-02-15 05:25:07.421472 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:25:07.421486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 05:25:07.421499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:25:07.421511 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:25:07.421543 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:25:07.421555 | orchestrator | 2026-02-15 05:25:07.421582 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2026-02-15 05:25:07.421594 | orchestrator | Sunday 15 February 2026 05:24:55 +0000 (0:00:02.265) 0:01:25.727 ******* 2026-02-15 05:25:07.421605 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-15 05:25:07.421617 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-15 05:25:07.421628 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2026-02-15 05:25:07.421638 | orchestrator | 2026-02-15 05:25:07.421649 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2026-02-15 05:25:07.421660 | orchestrator | Sunday 15 February 2026 05:24:58 +0000 (0:00:02.532) 0:01:28.259 ******* 2026-02-15 05:25:07.421761 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-15 05:25:07.421782 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-15 05:25:07.421793 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2026-02-15 05:25:07.421804 | orchestrator | 2026-02-15 05:25:07.421833 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2026-02-15 05:25:07.421845 | orchestrator | Sunday 15 February 2026 05:25:00 +0000 (0:00:02.525) 0:01:30.785 ******* 2026-02-15 05:25:07.421857 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-15 
05:25:07.421868 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-15 05:25:07.421882 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2026-02-15 05:25:07.421895 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-15 05:25:07.421908 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:25:07.421922 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-15 05:25:07.421935 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:25:07.421948 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-02-15 05:25:07.421960 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:25:07.421973 | orchestrator | 2026-02-15 05:25:07.421987 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-15 05:25:07.421999 | orchestrator | Sunday 15 February 2026 05:25:03 +0000 (0:00:02.497) 0:01:33.282 ******* 2026-02-15 05:25:07.422013 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-15 05:25:07.422090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': 
{'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-15 05:25:07.422115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-15 05:25:07.422135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:25:07.422157 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:25:11.502614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:25:11.502809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:25:11.502827 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-15 05:25:11.502864 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-15 05:25:11.502876 | orchestrator |
2026-02-15 05:25:11.502888 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] ***
2026-02-15 05:25:11.502899 | orchestrator | Sunday 15 February 2026 05:25:07 +0000 (0:00:04.006) 0:01:37.289 *******
2026-02-15 05:25:11.502910 | orchestrator | changed: [testbed-node-0] => {
2026-02-15 05:25:11.502920 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:25:11.502931 | orchestrator | }
2026-02-15 05:25:11.502941 | orchestrator | changed: [testbed-node-1] => {
2026-02-15 05:25:11.502950 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:25:11.502960 | orchestrator | }
2026-02-15 05:25:11.502970 | orchestrator | changed: [testbed-node-2] => {
2026-02-15 05:25:11.502979 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:25:11.502989 | orchestrator | }
2026-02-15 05:25:11.502999 | orchestrator |
2026-02-15 05:25:11.503009 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-15 05:25:11.503018 | orchestrator | Sunday 15 February 2026 05:25:08 +0000 (0:00:01.532) 0:01:38.823 *******
2026-02-15 05:25:11.503029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-02-15 05:25:11.503057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-15 05:25:11.503068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-15 05:25:11.503078 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:25:11.503088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-02-15 05:25:11.503124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-15 05:25:11.503136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes':
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-15 05:25:11.503148 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:25:11.503163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-02-15 05:25:11.503175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-02-15 05:25:11.503195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208',
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-02-15 05:25:17.327063 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:25:17.327142 | orchestrator |
2026-02-15 05:25:17.327151 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-02-15 05:25:17.327158 | orchestrator | Sunday 15 February 2026 05:25:11 +0000 (0:00:02.538) 0:01:41.361 *******
2026-02-15 05:25:17.327164 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 05:25:17.327188 | orchestrator |
2026-02-15 05:25:17.327195 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-02-15 05:25:17.327200 | orchestrator | Sunday 15 February 2026 05:25:13 +0000 (0:00:02.041) 0:01:43.403 *******
2026-02-15 05:25:17.327209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042',
'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:25:17.327219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-15 05:25:17.327237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-15 05:25:17.327244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'],
'timeout': '30'}}})
2026-02-15 05:25:17.327261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:25:17.327273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042',
'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:25:17.327289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-15 05:25:17.327296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-15 05:25:17.327305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-15 05:25:17.327312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-15 05:25:17.327321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-15 05:25:19.209009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-15 05:25:19.209119 | orchestrator |
2026-02-15 05:25:19.209136 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-02-15 05:25:19.209149 | orchestrator | Sunday 15 February 2026 05:25:18 +0000 (0:00:04.935) 0:01:48.339 *******
2026-02-15 05:25:19.209163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:25:19.209179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-15 05:25:19.209210 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-15 05:25:19.209223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-15 05:25:19.209255 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:25:19.209286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external':
False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:25:19.209300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-15 05:25:19.209311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-15 05:25:19.209323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-15 05:25:19.209335 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:25:19.209351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-api:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:25:19.209371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-evaluator:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-02-15 05:25:19.209389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-listener:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-02-15 05:25:34.562982 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/aodh-notifier:20.0.0.20251208', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-02-15 05:25:34.563102 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:25:34.563120 | orchestrator |
2026-02-15 05:25:34.563129 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] **************************
2026-02-15 05:25:34.563137 | orchestrator | Sunday 15 February 2026 05:25:20 +0000 (0:00:01.861) 0:01:50.201 *******
2026-02-15 05:25:34.563145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:25:34.563165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api',
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:25:34.563174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:25:34.563182 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:25:34.563190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:25:34.563196 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:25:34.563211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:25:34.563218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:25:34.563225 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:25:34.563250 | orchestrator |
2026-02-15 05:25:34.563257 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] ***************
2026-02-15 05:25:34.563264 | orchestrator | Sunday 15 February 2026 05:25:22 +0000 (0:00:02.429) 0:01:52.631 *******
2026-02-15 05:25:34.563271 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:25:34.563278 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:25:34.563285 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:25:34.563292 | orchestrator |
2026-02-15 05:25:34.563298 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] ***************
2026-02-15 05:25:34.563305 | orchestrator | Sunday 15 February 2026 05:25:25 +0000 (0:00:02.436) 0:01:55.068 *******
2026-02-15 05:25:34.563312 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:25:34.563318 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:25:34.563325 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:25:34.563331 | orchestrator |
2026-02-15 05:25:34.563338 | orchestrator | TASK [include_role : barbican] *************************************************
2026-02-15 05:25:34.563345 | orchestrator | Sunday 15 February 2026 05:25:28 +0000 (0:00:02.929) 0:01:57.997 *******
2026-02-15 05:25:34.563352 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 05:25:34.563359 | orchestrator |
2026-02-15 05:25:34.563365 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] *******************
2026-02-15 05:25:34.563372 | orchestrator | Sunday 15 February 2026 05:25:29 +0000 (0:00:01.794) 0:01:59.792 *******
2026-02-15 05:25:34.563396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra':
['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:25:34.563406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-02-15 05:25:34.563414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-02-15 05:25:34.563426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:25:34.563439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 05:25:34.563447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:25:34.563460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:25:36.216861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 05:25:36.217058 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:25:36.217077 | orchestrator | 2026-02-15 05:25:36.217088 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-02-15 05:25:36.217098 | orchestrator | Sunday 15 February 2026 05:25:34 +0000 (0:00:04.639) 0:02:04.432 ******* 2026-02-15 05:25:36.217109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  
2026-02-15 05:25:36.217120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 05:25:36.217129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:25:36.217137 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:25:36.217162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:25:36.217182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 05:25:36.217190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:25:36.217199 | orchestrator | skipping: 
[testbed-node-1] 2026-02-15 05:25:36.217207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-api:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:25:36.217216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-keystone-listener:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-02-15 05:25:36.217230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/barbican-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:25:52.854270 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:25:52.854394 | orchestrator | 2026-02-15 05:25:52.854412 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-02-15 05:25:52.854426 | orchestrator | Sunday 15 February 2026 05:25:36 +0000 (0:00:01.653) 0:02:06.085 ******* 2026-02-15 05:25:52.854438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:25:52.854477 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:25:52.854490 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:25:52.854502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:25:52.854513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:25:52.854525 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:25:52.854536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:25:52.854547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:25:52.854558 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:25:52.854568 | orchestrator | 2026-02-15 05:25:52.854579 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-02-15 05:25:52.854590 | orchestrator | Sunday 15 February 2026 05:25:38 +0000 (0:00:01.861) 0:02:07.947 ******* 2026-02-15 05:25:52.854601 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:25:52.854613 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:25:52.854623 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:25:52.854634 | orchestrator | 2026-02-15 05:25:52.854645 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-02-15 05:25:52.854700 | orchestrator | Sunday 15 February 2026 05:25:40 +0000 (0:00:02.276) 0:02:10.223 ******* 2026-02-15 05:25:52.854712 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:25:52.854723 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:25:52.854734 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:25:52.854745 | orchestrator | 2026-02-15 05:25:52.854756 | orchestrator | TASK [include_role : blazar] *************************************************** 
2026-02-15 05:25:52.854766 | orchestrator | Sunday 15 February 2026 05:25:43 +0000 (0:00:02.979) 0:02:13.202 ******* 2026-02-15 05:25:52.854777 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:25:52.854789 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:25:52.854799 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:25:52.854810 | orchestrator | 2026-02-15 05:25:52.854821 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-02-15 05:25:52.854832 | orchestrator | Sunday 15 February 2026 05:25:44 +0000 (0:00:01.429) 0:02:14.632 ******* 2026-02-15 05:25:52.854864 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:25:52.854876 | orchestrator | 2026-02-15 05:25:52.854887 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-02-15 05:25:52.854897 | orchestrator | Sunday 15 February 2026 05:25:46 +0000 (0:00:01.755) 0:02:16.388 ******* 2026-02-15 05:25:52.854910 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-15 05:25:52.854948 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 
'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-15 05:25:52.854961 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2026-02-15 05:25:52.854976 | orchestrator | 2026-02-15 05:25:52.854994 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-02-15 05:25:52.855011 | orchestrator | Sunday 15 February 2026 05:25:50 +0000 (0:00:03.660) 0:02:20.048 ******* 2026-02-15 05:25:52.855027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-15 05:25:52.855048 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:25:52.855068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-15 05:25:52.855080 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:25:52.855099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2026-02-15 05:26:05.042765 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:26:05.042920 | orchestrator | 2026-02-15 05:26:05.042941 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-02-15 05:26:05.042980 | orchestrator | Sunday 15 February 2026 05:25:52 +0000 (0:00:02.675) 0:02:22.723 ******* 2026-02-15 05:26:05.043000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 05:26:05.043015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 05:26:05.043030 | orchestrator | skipping: 
[testbed-node-0] 2026-02-15 05:26:05.043042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 05:26:05.043053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 05:26:05.043065 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:26:05.043100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 05:26:05.043113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2026-02-15 05:26:05.043124 | orchestrator | skipping: [testbed-node-2] 2026-02-15 
05:26:05.043135 | orchestrator | 2026-02-15 05:26:05.043146 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-02-15 05:26:05.043157 | orchestrator | Sunday 15 February 2026 05:25:55 +0000 (0:00:02.853) 0:02:25.577 ******* 2026-02-15 05:26:05.043168 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:26:05.043179 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:26:05.043190 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:26:05.043200 | orchestrator | 2026-02-15 05:26:05.043211 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-02-15 05:26:05.043222 | orchestrator | Sunday 15 February 2026 05:25:57 +0000 (0:00:01.527) 0:02:27.104 ******* 2026-02-15 05:26:05.043233 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:26:05.043244 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:26:05.043255 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:26:05.043271 | orchestrator | 2026-02-15 05:26:05.043290 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-02-15 05:26:05.043306 | orchestrator | Sunday 15 February 2026 05:25:59 +0000 (0:00:02.356) 0:02:29.460 ******* 2026-02-15 05:26:05.043322 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:26:05.043339 | orchestrator | 2026-02-15 05:26:05.043356 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-02-15 05:26:05.043373 | orchestrator | Sunday 15 February 2026 05:26:01 +0000 (0:00:01.741) 0:02:31.202 ******* 2026-02-15 05:26:05.043434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:26:05.043462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:26:05.043498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 05:26:05.043520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 05:26:05.043542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:26:05.043574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:26:07.126264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 05:26:07.126389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 05:26:07.126407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:26:07.126421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:26:07.126432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 05:26:07.126472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 05:26:07.126483 | orchestrator | 2026-02-15 05:26:07.126495 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-02-15 05:26:07.126515 | orchestrator | Sunday 15 February 2026 05:26:06 +0000 (0:00:04.829) 0:02:36.031 ******* 2026-02-15 05:26:07.126527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 
'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:26:07.126538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:26:07.126548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 05:26:07.126558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 05:26:07.126567 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:26:07.126591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:26:18.596783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:26:18.596895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 05:26:18.596910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 05:26:18.596923 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:26:18.596937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-api:26.2.1.20251208', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'wsgi': 'cinder.wsgi.api:application', 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:26:18.596965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-scheduler:26.2.1.20251208', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:26:18.597019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-volume:26.2.1.20251208', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-02-15 05:26:18.597031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/cinder-backup:26.2.1.20251208', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-02-15 05:26:18.597041 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:26:18.597051 | 
orchestrator | 2026-02-15 05:26:18.597062 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-02-15 05:26:18.597073 | orchestrator | Sunday 15 February 2026 05:26:08 +0000 (0:00:02.062) 0:02:38.094 ******* 2026-02-15 05:26:18.597083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:26:18.597095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:26:18.597106 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:26:18.597116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:26:18.597126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:26:18.597135 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:26:18.597145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:26:18.597155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:26:18.597173 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:26:18.597182 | orchestrator | 2026-02-15 05:26:18.597192 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-02-15 05:26:18.597202 | orchestrator | Sunday 15 February 2026 05:26:10 +0000 (0:00:02.080) 0:02:40.175 ******* 2026-02-15 05:26:18.597216 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:26:18.597227 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:26:18.597236 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:26:18.597246 | orchestrator | 2026-02-15 05:26:18.597255 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-02-15 05:26:18.597267 | orchestrator | Sunday 15 February 2026 05:26:12 +0000 (0:00:02.374) 0:02:42.549 ******* 2026-02-15 05:26:18.597278 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:26:18.597289 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:26:18.597300 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:26:18.597311 | orchestrator | 2026-02-15 05:26:18.597321 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-02-15 05:26:18.597332 | orchestrator | Sunday 15 February 2026 05:26:15 +0000 (0:00:02.929) 0:02:45.479 ******* 2026-02-15 05:26:18.597343 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:26:18.597355 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:26:18.597366 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:26:18.597377 | orchestrator | 2026-02-15 05:26:18.597388 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-02-15 05:26:18.597399 | orchestrator | Sunday 15 February 2026 05:26:17 +0000 (0:00:01.569) 0:02:47.048 ******* 
2026-02-15 05:26:18.597410 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:26:18.597421 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:26:18.597439 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:26:24.444412 | orchestrator | 2026-02-15 05:26:24.444531 | orchestrator | TASK [include_role : designate] ************************************************ 2026-02-15 05:26:24.444549 | orchestrator | Sunday 15 February 2026 05:26:18 +0000 (0:00:01.417) 0:02:48.466 ******* 2026-02-15 05:26:24.444561 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:26:24.444572 | orchestrator | 2026-02-15 05:26:24.444583 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-02-15 05:26:24.444594 | orchestrator | Sunday 15 February 2026 05:26:20 +0000 (0:00:01.897) 0:02:50.363 ******* 2026-02-15 05:26:24.444611 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:26:24.444629 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 05:26:24.444756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 05:26:24.444798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 
05:26:24.444819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-02-15 05:26:24.444853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:26:24.444865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-02-15 
05:26:24.444876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:26:24.444898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 
'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:26:24.444916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 05:26:24.444937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-02-15 05:26:26.397276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 05:26:26.397377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-02-15 05:26:26.397420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-02-15 05:26:26.397434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 05:26:26.397459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 05:26:26.397471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 05:26:26.397502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 05:26:26.397514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-15 05:26:26.397525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 05:26:26.397544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-15 05:26:26.397555 | orchestrator |
2026-02-15 05:26:26.397569 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2026-02-15 05:26:26.397581 | orchestrator | Sunday 15 February 2026 05:26:25 +0000 (0:00:05.197) 0:02:55.561 *******
2026-02-15 05:26:26.397599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:26:26.397614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 05:26:26.397688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 05:26:27.700546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 05:26:27.700798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 05:26:27.700827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 05:26:27.700856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-15 05:26:27.700869 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:26:27.700884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:26:27.700921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 05:26:27.700942 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 05:26:27.700961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 05:26:27.700979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 05:26:27.700999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 05:26:27.701019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-api:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:26:27.701049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-15 05:26:43.136411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-backend-bind9:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-02-15 05:26:43.136549 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:26:43.136582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-central:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-02-15 05:26:43.136603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-mdns:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-02-15 05:26:43.136653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-producer:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-02-15 05:26:43.136674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-worker:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-02-15 05:26:43.136705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/designate-sink:20.0.1.20251208', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2026-02-15 05:26:43.136754 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:26:43.136768 | orchestrator |
2026-02-15 05:26:43.136780 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2026-02-15 05:26:43.136792 | orchestrator | Sunday 15 February 2026 05:26:27 +0000 (0:00:02.014) 0:02:57.576 *******
2026-02-15 05:26:43.136827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:26:43.136842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:26:43.136855 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:26:43.136866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:26:43.136877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:26:43.136888 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:26:43.136899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:26:43.136910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:26:43.136920 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:26:43.136931 | orchestrator |
2026-02-15 05:26:43.136943 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2026-02-15 05:26:43.136953 | orchestrator | Sunday 15 February 2026 05:26:29 +0000 (0:00:02.187) 0:02:59.763 *******
2026-02-15 05:26:43.136964 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:26:43.136975 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:26:43.136986 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:26:43.136996 | orchestrator |
2026-02-15 05:26:43.137007 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2026-02-15 05:26:43.137018 | orchestrator | Sunday 15 February 2026 05:26:32 +0000 (0:00:02.324) 0:03:02.088 *******
2026-02-15 05:26:43.137029 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:26:43.137040 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:26:43.137050 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:26:43.137061 | orchestrator |
2026-02-15 05:26:43.137071 | orchestrator | TASK [include_role : etcd] *****************************************************
2026-02-15 05:26:43.137082 | orchestrator | Sunday 15 February 2026 05:26:35 +0000 (0:00:02.883) 0:03:04.971 *******
2026-02-15 05:26:43.137093 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:26:43.137103 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:26:43.137114 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:26:43.137124 | orchestrator |
2026-02-15 05:26:43.137135 | orchestrator | TASK [include_role : glance] ***************************************************
2026-02-15 05:26:43.137146 | orchestrator | Sunday 15 February 2026 05:26:36 +0000 (0:00:01.396) 0:03:06.368 *******
2026-02-15 05:26:43.137156 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 05:26:43.137176 | orchestrator |
2026-02-15 05:26:43.137187 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2026-02-15 05:26:43.137197 | orchestrator | Sunday 15 February 2026 05:26:38 +0000 (0:00:01.931) 0:03:08.300 *******
2026-02-15 05:26:43.137226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-15 05:26:44.235562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-15 05:26:44.235761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-15 05:26:44.235839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-15 05:26:44.235865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-15 05:26:44.235923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-15 05:26:47.666433 | orchestrator |
2026-02-15 05:26:47.666540 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] ***
2026-02-15 05:26:47.666556 | orchestrator | Sunday 15 February 2026 05:26:44 +0000 (0:00:05.811) 0:03:14.112 *******
2026-02-15 05:26:47.666574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-15 05:26:47.666735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
2026-02-15 05:26:47.666758 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:26:47.666794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2026-02-15 05:26:47.666823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/glance-api:30.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy':
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-02-15 05:26:47.666846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-15 05:27:06.430201 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:06.430342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/glance-tls-proxy:30.0.1.20251208', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-02-15 05:27:06.430363 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:06.430376 | orchestrator | 2026-02-15 05:27:06.430388 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-02-15 05:27:06.430400 | orchestrator | Sunday 15 February 2026 05:26:48 +0000 (0:00:04.550) 0:03:18.662 ******* 2026-02-15 05:27:06.430413 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-15 05:27:06.430425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-15 05:27:06.430456 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:27:06.430468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-15 05:27:06.430498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option 
httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-15 05:27:06.430509 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:06.430521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-15 05:27:06.430537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h', 'option httpchk'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-02-15 05:27:06.430549 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:06.430559 | orchestrator | 2026-02-15 05:27:06.430570 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-02-15 05:27:06.430580 | orchestrator | Sunday 15 February 2026 05:26:53 +0000 (0:00:04.705) 0:03:23.367 ******* 2026-02-15 05:27:06.430591 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:27:06.430603 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:27:06.430677 | 
orchestrator | ok: [testbed-node-2] 2026-02-15 05:27:06.430690 | orchestrator | 2026-02-15 05:27:06.430701 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-02-15 05:27:06.430712 | orchestrator | Sunday 15 February 2026 05:26:55 +0000 (0:00:02.374) 0:03:25.742 ******* 2026-02-15 05:27:06.430722 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:27:06.430734 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:27:06.430746 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:27:06.430757 | orchestrator | 2026-02-15 05:27:06.430768 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-02-15 05:27:06.430780 | orchestrator | Sunday 15 February 2026 05:26:58 +0000 (0:00:02.877) 0:03:28.619 ******* 2026-02-15 05:27:06.430791 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:27:06.430803 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:06.430815 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:06.430838 | orchestrator | 2026-02-15 05:27:06.430850 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-02-15 05:27:06.430862 | orchestrator | Sunday 15 February 2026 05:27:00 +0000 (0:00:01.414) 0:03:30.034 ******* 2026-02-15 05:27:06.430874 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:27:06.430885 | orchestrator | 2026-02-15 05:27:06.430896 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-02-15 05:27:06.430908 | orchestrator | Sunday 15 February 2026 05:27:01 +0000 (0:00:01.678) 0:03:31.712 ******* 2026-02-15 05:27:06.430921 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:27:06.430945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:27:22.971094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:27:22.971211 | orchestrator | 2026-02-15 05:27:22.971244 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-02-15 05:27:22.971258 | orchestrator | Sunday 15 February 2026 05:27:06 +0000 (0:00:04.589) 0:03:36.301 ******* 2026-02-15 05:27:22.971271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:27:22.971306 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:27:22.971320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:27:22.971337 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:22.971356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/grafana:12.3.0.20251208', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:27:22.971375 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:22.971393 | orchestrator | 2026-02-15 05:27:22.971411 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-02-15 05:27:22.971426 | orchestrator | Sunday 15 February 2026 05:27:08 +0000 (0:00:01.779) 0:03:38.081 ******* 2026-02-15 05:27:22.971439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:27:22.971453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:27:22.971484 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:27:22.971495 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:27:22.971507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:27:22.971518 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:22.971529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:27:22.971547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:27:22.971558 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:22.971569 | orchestrator | 2026-02-15 05:27:22.971580 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-02-15 05:27:22.971591 | orchestrator | Sunday 15 February 2026 05:27:09 +0000 (0:00:01.524) 0:03:39.605 ******* 2026-02-15 05:27:22.971645 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:27:22.971660 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:27:22.971673 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:27:22.971685 | orchestrator | 2026-02-15 05:27:22.971697 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-02-15 05:27:22.971710 | orchestrator | Sunday 15 February 2026 
05:27:11 +0000 (0:00:02.241) 0:03:41.846 ******* 2026-02-15 05:27:22.971723 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:27:22.971733 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:27:22.971744 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:27:22.971754 | orchestrator | 2026-02-15 05:27:22.971765 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-02-15 05:27:22.971776 | orchestrator | Sunday 15 February 2026 05:27:14 +0000 (0:00:02.907) 0:03:44.754 ******* 2026-02-15 05:27:22.971787 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:27:22.971797 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:22.971808 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:22.971819 | orchestrator | 2026-02-15 05:27:22.971830 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-02-15 05:27:22.971840 | orchestrator | Sunday 15 February 2026 05:27:16 +0000 (0:00:01.382) 0:03:46.137 ******* 2026-02-15 05:27:22.971851 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:27:22.971862 | orchestrator | 2026-02-15 05:27:22.971872 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-02-15 05:27:22.971883 | orchestrator | Sunday 15 February 2026 05:27:18 +0000 (0:00:01.764) 0:03:47.902 ******* 2026-02-15 05:27:22.971908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 
'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-15 05:27:24.971473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-15 
05:27:24.971700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-02-15 05:27:24.971734 | orchestrator | 2026-02-15 05:27:24.971755 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-02-15 05:27:24.971769 | orchestrator | Sunday 15 February 2026 05:27:22 +0000 (0:00:04.940) 0:03:52.843 ******* 2026-02-15 05:27:24.971782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-15 05:27:24.971802 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:27:24.971846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-15 05:27:33.997286 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:33.997409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/horizon:25.3.2.20251208', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_VENUS': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-02-15 05:27:33.997432 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:33.997445 | orchestrator | 2026-02-15 05:27:33.997458 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-02-15 05:27:33.997470 | orchestrator | Sunday 15 February 2026 05:27:24 +0000 (0:00:01.996) 0:03:54.839 ******* 2026-02-15 05:27:33.997483 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-15 05:27:33.997522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-15 05:27:33.997538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-15 05:27:33.997551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-15 05:27:33.997580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-15 05:27:33.997591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 
'custom_member_list': []}})  2026-02-15 05:27:33.997672 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:27:33.997687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-15 05:27:33.997698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-15 05:27:33.997806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-15 05:27:33.997839 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-15 05:27:33.997861 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:33.997881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-15 05:27:33.997901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': 
{'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-15 05:27:33.997924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin', 'option httpchk'], 'tls_backend': 'no'}})  2026-02-15 05:27:33.997958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-02-15 05:27:33.997980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-02-15 05:27:33.998001 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:33.998132 | orchestrator | 2026-02-15 05:27:33.998156 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-02-15 05:27:33.998175 | orchestrator | Sunday 15 February 2026 05:27:26 +0000 (0:00:01.985) 0:03:56.825 ******* 2026-02-15 05:27:33.998193 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:27:33.998212 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:27:33.998230 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:27:33.998248 | orchestrator | 2026-02-15 05:27:33.998266 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-02-15 05:27:33.998284 | orchestrator | Sunday 15 February 2026 05:27:29 
+0000 (0:00:02.409) 0:03:59.234 ******* 2026-02-15 05:27:33.998303 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:27:33.998334 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:27:33.998355 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:27:33.998374 | orchestrator | 2026-02-15 05:27:33.998391 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-02-15 05:27:33.998408 | orchestrator | Sunday 15 February 2026 05:27:32 +0000 (0:00:02.973) 0:04:02.207 ******* 2026-02-15 05:27:33.998421 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:27:33.998431 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:33.998442 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:33.998452 | orchestrator | 2026-02-15 05:27:33.998463 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-02-15 05:27:33.998474 | orchestrator | Sunday 15 February 2026 05:27:33 +0000 (0:00:01.428) 0:04:03.636 ******* 2026-02-15 05:27:33.998501 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:27:44.126301 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:44.126411 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:44.126423 | orchestrator | 2026-02-15 05:27:44.126433 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-02-15 05:27:44.126443 | orchestrator | Sunday 15 February 2026 05:27:35 +0000 (0:00:01.412) 0:04:05.048 ******* 2026-02-15 05:27:44.126451 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:27:44.126460 | orchestrator | 2026-02-15 05:27:44.126469 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-02-15 05:27:44.126478 | orchestrator | Sunday 15 February 2026 05:27:37 +0000 (0:00:02.041) 0:04:07.090 ******* 2026-02-15 05:27:44.126493 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-15 05:27:44.126527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 05:27:44.126538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 
'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 05:27:44.126561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}}) 2026-02-15 05:27:44.126590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 05:27:44.126624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 05:27:44.126633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 
'option httpchk']}}}}) 2026-02-15 05:27:44.126650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 05:27:44.126659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 05:27:44.126668 | orchestrator | 2026-02-15 05:27:44.126681 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-02-15 05:27:44.126690 | orchestrator | Sunday 15 February 2026 05:27:42 +0000 (0:00:04.971) 0:04:12.061 ******* 2026-02-15 05:27:44.126706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-15 05:27:45.873892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 05:27:45.874106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 05:27:45.874134 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:27:45.874159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-15 05:27:45.874201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2026-02-15 05:27:45.874224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 05:27:45.874243 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:45.874291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}}}})  2026-02-15 05:27:45.874321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-ssh:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-02-15 05:27:45.874333 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keystone-fernet:27.0.1.20251208', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-02-15 05:27:45.874344 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:45.874355 | orchestrator | 2026-02-15 05:27:45.874368 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-02-15 05:27:45.874380 | orchestrator | Sunday 15 February 2026 05:27:44 +0000 (0:00:01.932) 0:04:13.994 ******* 2026-02-15 05:27:45.874392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-15 05:27:45.874408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-15 05:27:45.874422 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:27:45.874443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-15 05:27:45.874456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-15 05:27:45.874468 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:27:45.874481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-15 05:27:45.874494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin', 'option httpchk']}})  2026-02-15 05:27:45.874517 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:27:45.874538 | orchestrator | 2026-02-15 05:27:45.874556 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2026-02-15 05:27:45.874586 | orchestrator | Sunday 15 February 2026 05:27:45 +0000 
(0:00:01.745) 0:04:15.740 ******* 2026-02-15 05:28:01.541494 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:28:01.541644 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:28:01.541661 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:28:01.541669 | orchestrator | 2026-02-15 05:28:01.541677 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-02-15 05:28:01.541684 | orchestrator | Sunday 15 February 2026 05:27:48 +0000 (0:00:02.309) 0:04:18.049 ******* 2026-02-15 05:28:01.541691 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:28:01.541697 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:28:01.541703 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:28:01.541710 | orchestrator | 2026-02-15 05:28:01.541716 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-02-15 05:28:01.541722 | orchestrator | Sunday 15 February 2026 05:27:51 +0000 (0:00:03.225) 0:04:21.275 ******* 2026-02-15 05:28:01.541738 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:28:01.541745 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:28:01.541751 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:28:01.541758 | orchestrator | 2026-02-15 05:28:01.541764 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-02-15 05:28:01.541778 | orchestrator | Sunday 15 February 2026 05:27:52 +0000 (0:00:01.469) 0:04:22.744 ******* 2026-02-15 05:28:01.541784 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:28:01.541790 | orchestrator | 2026-02-15 05:28:01.541797 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-02-15 05:28:01.541803 | orchestrator | Sunday 15 February 2026 05:27:54 +0000 (0:00:01.850) 0:04:24.595 ******* 2026-02-15 05:28:01.541814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 
'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:28:01.541839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:28:01.541848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:28:01.541884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:28:01.541892 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 
'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:28:01.541899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:28:01.541906 | orchestrator | 2026-02-15 05:28:01.541913 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-02-15 05:28:01.541922 | orchestrator | Sunday 15 February 2026 05:27:59 +0000 (0:00:05.087) 0:04:29.683 ******* 2026-02-15 05:28:01.541938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:28:01.541964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:28:14.721222 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:28:14.721344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:28:14.721365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:28:14.721377 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:28:14.721405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/magnum-api:20.0.1.20251208', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:28:14.721441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/magnum-conductor:20.0.1.20251208', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:28:14.721454 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:28:14.721465 | orchestrator | 2026-02-15 05:28:14.721477 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-02-15 05:28:14.721490 | orchestrator | Sunday 15 February 2026 05:28:01 +0000 
(0:00:01.731) 0:04:31.414 ******* 2026-02-15 05:28:14.721523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:28:14.721547 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:28:14.721568 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:28:14.721651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:28:14.721672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:28:14.721692 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:28:14.721711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:28:14.721731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:28:14.721750 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:28:14.721769 | orchestrator | 2026-02-15 05:28:14.721783 | orchestrator | TASK 
[proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-02-15 05:28:14.721796 | orchestrator | Sunday 15 February 2026 05:28:03 +0000 (0:00:02.003) 0:04:33.417 ******* 2026-02-15 05:28:14.721809 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:28:14.721821 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:28:14.721839 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:28:14.721859 | orchestrator | 2026-02-15 05:28:14.721880 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-02-15 05:28:14.721904 | orchestrator | Sunday 15 February 2026 05:28:05 +0000 (0:00:02.253) 0:04:35.671 ******* 2026-02-15 05:28:14.721917 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:28:14.721930 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:28:14.721943 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:28:14.721955 | orchestrator | 2026-02-15 05:28:14.721968 | orchestrator | TASK [include_role : manila] *************************************************** 2026-02-15 05:28:14.721980 | orchestrator | Sunday 15 February 2026 05:28:08 +0000 (0:00:03.025) 0:04:38.697 ******* 2026-02-15 05:28:14.721993 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:28:14.722006 | orchestrator | 2026-02-15 05:28:14.722090 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-02-15 05:28:14.722104 | orchestrator | Sunday 15 February 2026 05:28:10 +0000 (0:00:02.100) 0:04:40.797 ******* 2026-02-15 05:28:14.722126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:28:14.722141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:28:14.722166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 05:28:16.475521 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 05:28:16.475619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:28:16.475661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:28:16.475669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:28:16.475676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 
'timeout': '30'}}})  2026-02-15 05:28:16.475695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:28:16.475702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 05:28:16.475713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 05:28:16.475722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 05:28:16.475729 | orchestrator | 2026-02-15 05:28:16.475736 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-02-15 05:28:16.475743 | orchestrator | Sunday 15 February 2026 05:28:15 +0000 (0:00:04.896) 0:04:45.694 ******* 2026-02-15 05:28:16.475751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 
'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:28:16.475762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:28:19.615053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-02-15 05:28:19.615146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-02-15 05:28:19.615155 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:28:19.615172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:28:19.615179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:28:19.615185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 
'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 05:28:19.615203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 05:28:19.615213 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:28:19.615218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-api:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:28:19.615224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-scheduler:20.0.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-02-15 05:28:19.615231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-share:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-02-15 05:28:19.615237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/2025.1/manila-data:20.0.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-02-15 05:28:19.615242 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:28:19.615247 | orchestrator |
2026-02-15 05:28:19.615252 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-02-15 05:28:19.615259 | orchestrator | Sunday 15 February 2026 05:28:17 +0000 (0:00:01.782) 0:04:47.477 *******
2026-02-15 05:28:19.615265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:28:19.615273 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:28:19.615279 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:28:19.615284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:28:19.615298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:28:36.480633 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:28:36.480752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:28:36.480775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786', 'backend_http_extra': ['option httpchk']}})
2026-02-15 05:28:36.480789 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:28:36.480801 | orchestrator |
2026-02-15 05:28:36.480814 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-02-15 05:28:36.480826 | orchestrator | Sunday 15 February 2026 05:28:19 +0000 (0:00:02.007) 0:04:49.484 *******
2026-02-15 05:28:36.480837 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:28:36.480848 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:28:36.480858 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:28:36.480869 | orchestrator |
2026-02-15 05:28:36.480880 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-02-15 05:28:36.480891 | orchestrator | Sunday 15 February 2026 05:28:22 +0000 (0:00:03.351) 0:04:52.836 *******
2026-02-15 05:28:36.480901 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:28:36.480912 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:28:36.480922 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:28:36.480933 | orchestrator |
2026-02-15 05:28:36.480944 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-02-15 05:28:36.480955 | orchestrator | Sunday 15 February 2026 05:28:26 +0000 (0:00:03.071) 0:04:55.907 *******
2026-02-15 05:28:36.480966 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 05:28:36.480976 | orchestrator |
2026-02-15 05:28:36.480987 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-02-15 05:28:36.480998 | orchestrator | Sunday 15 February 2026 05:28:28 +0000 (0:00:02.585) 0:04:58.493 *******
2026-02-15 05:28:36.481008 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 05:28:36.481019 | orchestrator |
2026-02-15 05:28:36.481030 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-02-15 05:28:36.481041 | orchestrator | Sunday 15 February 2026 05:28:32 +0000 (0:00:04.038) 0:05:02.531 *******
2026-02-15 05:28:36.481072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:28:36.481128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-15 05:28:36.481143 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:28:36.481163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:28:36.481178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-15 05:28:36.481190 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:28:36.481221 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:28:40.102386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-15 05:28:40.102488 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:28:40.102505 | orchestrator |
2026-02-15 05:28:40.102516 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2026-02-15 05:28:40.102528 | orchestrator | Sunday 15 February 2026 05:28:36 +0000 (0:00:03.818) 0:05:06.349 *******
2026-02-15 05:28:40.102543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:28:40.102735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-15 05:28:40.102758 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:28:40.102793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:28:40.102810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-15 05:28:40.102822 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:28:40.102835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:28:40.102861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-clustercheck:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2026-02-15 05:28:56.951414 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:28:56.951548 | orchestrator |
2026-02-15 05:28:56.951643 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2026-02-15 05:28:56.951663 | orchestrator | Sunday 15 February 2026 05:28:40 +0000 (0:00:03.620) 0:05:09.970 *******
2026-02-15 05:28:56.951677 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-15 05:28:56.951711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-15 05:28:56.951724 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:28:56.951736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-15 05:28:56.951773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-15 05:28:56.951785 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:28:56.951797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-15 05:28:56.951808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2026-02-15 05:28:56.951819 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:28:56.951830 | orchestrator |
2026-02-15 05:28:56.951841 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2026-02-15 05:28:56.951852 | orchestrator | Sunday 15 February 2026 05:28:44 +0000 (0:00:04.085) 0:05:14.056 *******
2026-02-15 05:28:56.951871 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:28:56.951913 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:28:56.951933 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:28:56.951951 | orchestrator |
2026-02-15 05:28:56.951970 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2026-02-15 05:28:56.951988 | orchestrator | Sunday 15 February 2026 05:28:47 +0000 (0:00:03.013) 0:05:17.069 *******
2026-02-15 05:28:56.952007 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:28:56.952020 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:28:56.952031 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:28:56.952041 | orchestrator |
2026-02-15 05:28:56.952052 | orchestrator | TASK [include_role : masakari] *************************************************
2026-02-15 05:28:56.952063 | orchestrator | Sunday 15 February 2026 05:28:49 +0000 (0:00:02.630) 0:05:19.700 *******
2026-02-15 05:28:56.952074 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:28:56.952085 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:28:56.952095 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:28:56.952106 | orchestrator |
2026-02-15 05:28:56.952117 | orchestrator | TASK [include_role : memcached] ************************************************
2026-02-15 05:28:56.952128 | orchestrator | Sunday 15 February 2026 05:28:51 +0000 (0:00:01.434) 0:05:21.135 *******
2026-02-15 05:28:56.952148 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 05:28:56.952159 | orchestrator |
2026-02-15 05:28:56.952170 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2026-02-15 05:28:56.952181 | orchestrator | Sunday 15 February 2026 05:28:53 +0000 (0:00:02.309) 0:05:23.444 *******
2026-02-15 05:28:56.952199 | orchestrator | ok: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-15 05:28:56.952213 | orchestrator | ok: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-15 05:28:56.952225 | orchestrator | ok: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-15 05:28:56.952241 | orchestrator |
2026-02-15 05:28:56.952260 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2026-02-15 05:28:56.952278 | orchestrator | Sunday 15 February 2026 05:28:56 +0000 (0:00:03.254) 0:05:26.699 *******
2026-02-15 05:28:56.952310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-15 05:29:12.182968 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:29:12.183097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-15 05:29:12.183135 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:29:12.183159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-02-15 05:29:12.183168 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:29:12.183175 | orchestrator |
2026-02-15 05:29:12.183184 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-02-15 05:29:12.183193 | orchestrator | Sunday 15 February 2026 05:28:58 +0000 (0:00:01.756) 0:05:28.456 *******
2026-02-15 05:29:12.183202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-15 05:29:12.183212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-15 05:29:12.183219 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:29:12.183227 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:29:12.183234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-02-15 05:29:12.183242 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:29:12.183249 | orchestrator |
2026-02-15 05:29:12.183257 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-02-15 05:29:12.183264 | orchestrator | Sunday 15 February 2026 05:28:59 +0000 (0:00:01.419) 0:05:29.876 *******
2026-02-15 05:29:12.183271 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:29:12.183278 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:29:12.183285 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:29:12.183292 | orchestrator |
2026-02-15 05:29:12.183299 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-02-15 05:29:12.183307 | orchestrator | Sunday 15 February 2026 05:29:01 +0000 (0:00:01.487) 0:05:31.363 *******
2026-02-15 05:29:12.183314 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:29:12.183321 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:29:12.183328 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:29:12.183335 | orchestrator |
2026-02-15 05:29:12.183342 | orchestrator | TASK [include_role : mistral] **************************************************
2026-02-15 05:29:12.183350 | orchestrator | Sunday 15 February 2026 05:29:03 +0000 (0:00:02.306) 0:05:33.669 *******
2026-02-15 05:29:12.183363 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:29:12.183370 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:29:12.183377 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:29:12.183384 | orchestrator |
2026-02-15 05:29:12.183391 | orchestrator | TASK [include_role : neutron] **************************************************
2026-02-15 05:29:12.183399 | orchestrator | Sunday 15 February 2026 05:29:05 +0000 (0:00:01.713) 0:05:35.383 *******
2026-02-15 05:29:12.183406 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 05:29:12.183413 | orchestrator |
2026-02-15 05:29:12.183420 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-02-15 05:29:12.183428 | orchestrator | Sunday 15 February 2026 05:29:07 +0000 (0:00:02.049) 0:05:37.433 *******
2026-02-15 05:29:12.183454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})
2026-02-15 05:29:12.183470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:12.183481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-15 05:29:12.183491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-15 05:29:12.183515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:12.398784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:12.398929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 
05:29:12.398947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 05:29:12.398961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:12.398976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:12.399014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-15 05:29:12.399071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:12.399091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:12.399107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 05:29:12.399121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:29:12.399143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:12.399164 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:12.611656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 
'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-15 05:29:12.611796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-15 05:29:12.611840 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:12.611857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:12.611871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:12.611907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 05:29:12.611927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:12.611940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:12.611961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-15 05:29:12.611973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:12.611984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2026-02-15 05:29:12.612008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 05:29:12.823472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:12.823648 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 
'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:29:12.823707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:12.823723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 
'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-15 05:29:12.823777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-15 05:29:12.823792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:12.823813 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:12.823826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:12.823838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 05:29:12.823850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:12.823875 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:15.398110 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-15 05:29:15.398239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:15.398278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:15.398291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 
'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 05:29:15.398302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:15.398310 | orchestrator | 2026-02-15 05:29:15.398319 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-02-15 05:29:15.398328 | orchestrator | Sunday 15 February 2026 05:29:13 +0000 (0:00:06.395) 0:05:43.828 ******* 2026-02-15 05:29:15.398371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:29:15.398387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:15.398397 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 
'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-15 05:29:15.398405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-15 05:29:15.398424 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:15.498758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:15.498906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:15.498925 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 
'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:29:15.498938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 05:29:15.498950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:15.498995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:15.499015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 
'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-15 05:29:15.499025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-15 05:29:15.499035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': 
'30'}}})  2026-02-15 05:29:15.499044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:15.499064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:15.633882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-15 05:29:15.634013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:15.634081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:15.634093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 05:29:15.634105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 
'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:15.634117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:15.634196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 05:29:15.634212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:15.634223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:15.634233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 
'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-15 05:29:15.634244 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:29:15.634256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:15.634267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:15.634293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-server:26.0.3.20251208', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:29:16.952290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 05:29:16.952425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/2025.1/neutron-openvswitch-agent:26.0.3.20251208', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:16.952444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:16.952463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}, 'pid_mode': '', 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-dhcp-agent:26.0.3.20251208', 'KOLLA_NAME': 'neutron_dhcp_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}}})  2026-02-15 05:29:16.952499 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:29:16.952534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'cgroupns_mode': 'private', 'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_IMAGE': 'registry.osism.tech/kolla/release/2025.1/neutron-l3-agent:26.0.3.20251208', 'KOLLA_LEGACY_IPTABLES': 'false', 'KOLLA_NAME': 'neutron_l3_agent', 'KOLLA_NEUTRON_WRAPPERS': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}, 'pid_mode': ''}})  2026-02-15 05:29:16.952547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-sriov-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:16.952599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-mlnx-agent:26.0.3.20251208', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:16.952612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-eswitchd:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:16.952624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': 'NONE', 'timeout': '30'}}})  2026-02-15 05:29:16.952650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metadata-agent:26.0.3.20251208', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:16.952671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-bgp-dragent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:32.260477 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-infoblox-ipam-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}}})  2026-02-15 05:29:32.261469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-metering-agent:26.0.3.20251208', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}}})  2026-02-15 05:29:32.261509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/2025.1/ironic-neutron-agent:26.0.3.20251208', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-02-15 05:29:32.261538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/neutron-tls-proxy:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-02-15 05:29:32.261590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/2025.1/neutron-ovn-agent:26.0.3.20251208', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-02-15 05:29:32.261599 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:29:32.261608 | orchestrator | 2026-02-15 05:29:32.261617 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-02-15 05:29:32.261626 | orchestrator | Sunday 15 February 2026 05:29:16 +0000 (0:00:02.995) 0:05:46.824 ******* 2026-02-15 05:29:32.261634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:29:32.261662 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:29:32.261671 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:29:32.261677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:29:32.261684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:29:32.261690 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:29:32.261697 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:29:32.261704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:29:32.261712 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:29:32.261718 | orchestrator | 2026-02-15 05:29:32.261725 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-02-15 05:29:32.261731 | orchestrator | Sunday 15 February 2026 05:29:19 +0000 (0:00:02.899) 0:05:49.723 ******* 2026-02-15 05:29:32.261738 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:29:32.261745 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:29:32.261753 | 
orchestrator | ok: [testbed-node-2] 2026-02-15 05:29:32.261769 | orchestrator | 2026-02-15 05:29:32.261776 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-02-15 05:29:32.261783 | orchestrator | Sunday 15 February 2026 05:29:22 +0000 (0:00:02.304) 0:05:52.028 ******* 2026-02-15 05:29:32.261790 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:29:32.261796 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:29:32.261803 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:29:32.261810 | orchestrator | 2026-02-15 05:29:32.261817 | orchestrator | TASK [include_role : placement] ************************************************ 2026-02-15 05:29:32.261824 | orchestrator | Sunday 15 February 2026 05:29:25 +0000 (0:00:02.957) 0:05:54.985 ******* 2026-02-15 05:29:32.261830 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:29:32.261837 | orchestrator | 2026-02-15 05:29:32.261845 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-02-15 05:29:32.261852 | orchestrator | Sunday 15 February 2026 05:29:27 +0000 (0:00:02.345) 0:05:57.331 ******* 2026-02-15 05:29:32.261866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-15 05:29:32.261882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-15 05:29:49.658580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-15 05:29:49.658726 | orchestrator | 2026-02-15 05:29:49.658746 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-02-15 05:29:49.658759 | orchestrator | Sunday 15 February 2026 05:29:32 +0000 (0:00:04.797) 0:06:02.128 ******* 2026-02-15 05:29:49.658773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-15 05:29:49.658786 | orchestrator | skipping: [testbed-node-0] 2026-02-15 
05:29:49.658814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-15 05:29:49.658826 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:29:49.658856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/2025.1/placement-api:13.0.0.20251208', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'wsgi': 'placement.wsgi.api:application', 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': 
['option httpchk GET /']}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-15 05:29:49.658869 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:29:49.658880 | orchestrator | 2026-02-15 05:29:49.658891 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-02-15 05:29:49.658911 | orchestrator | Sunday 15 February 2026 05:29:33 +0000 (0:00:01.723) 0:06:03.851 ******* 2026-02-15 05:29:49.658925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 05:29:49.658938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 05:29:49.658950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 05:29:49.658962 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:29:49.658974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 05:29:49.658985 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:29:49.658995 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 05:29:49.659007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 05:29:49.659018 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:29:49.659028 | orchestrator | 2026-02-15 05:29:49.659039 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-02-15 05:29:49.659050 | orchestrator | Sunday 15 February 2026 05:29:36 +0000 (0:00:02.221) 0:06:06.073 ******* 2026-02-15 05:29:49.659061 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:29:49.659073 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:29:49.659091 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:29:49.659103 | orchestrator | 2026-02-15 05:29:49.659115 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-02-15 05:29:49.659128 | orchestrator | Sunday 15 February 2026 05:29:38 +0000 (0:00:02.313) 0:06:08.386 ******* 2026-02-15 05:29:49.659141 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:29:49.659153 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:29:49.659166 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:29:49.659178 | orchestrator | 2026-02-15 05:29:49.659191 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-02-15 05:29:49.659203 | orchestrator | Sunday 15 February 2026 05:29:41 +0000 (0:00:02.988) 0:06:11.374 ******* 2026-02-15 05:29:49.659217 | orchestrator | included: nova for testbed-node-0, testbed-node-1, 
testbed-node-2 2026-02-15 05:29:49.659229 | orchestrator | 2026-02-15 05:29:49.659241 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-02-15 05:29:49.659253 | orchestrator | Sunday 15 February 2026 05:29:43 +0000 (0:00:02.434) 0:06:13.809 ******* 2026-02-15 05:29:49.659275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:29:50.891131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:29:50.891238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:29:50.891273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:29:50.891288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:29:50.891343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:29:50.891357 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:29:50.891370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:29:50.891388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': 
{'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:29:50.891401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:29:50.891432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:29:51.584289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:29:51.584401 | orchestrator | 2026-02-15 05:29:51.584419 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-02-15 05:29:51.584432 | orchestrator | Sunday 15 February 2026 05:29:50 +0000 (0:00:06.959) 0:06:20.769 ******* 2026-02-15 05:29:51.584448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'wsgi': 
'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:29:51.584482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:29:51.584517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:29:51.584738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:29:51.584777 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:29:51.584791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:29:51.584815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:29:51.584830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:29:51.584855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 
'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:29:51.584868 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:29:51.584893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.osapi_compute:application', 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:30:11.023271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-metadata', 'value': {'container_name': 'nova_metadata', 'group': 'nova-metadata', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-api:31.2.1.20251208', 'enabled': True, 'volumes': 
['/etc/kolla/nova-metadata/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8775 '], 'timeout': '30'}, 'wsgi': 'nova.wsgi.metadata:application', 'haproxy': {'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:30:11.023431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-scheduler:31.2.1.20251208', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-02-15 05:30:11.023463 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/2025.1/nova-super-conductor:31.2.1.20251208', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-02-15 05:30:11.023511 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:30:11.023587 | orchestrator | 2026-02-15 05:30:11.023612 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-02-15 05:30:11.023630 | orchestrator | Sunday 15 February 2026 05:29:52 +0000 (0:00:01.790) 0:06:22.560 ******* 2026-02-15 05:30:11.023642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:30:11.023657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:30:11.023669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:30:11.023681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:30:11.023692 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:30:11.023703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option 
httpchk']}})  2026-02-15 05:30:11.023734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:30:11.023746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:30:11.023757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:30:11.023774 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:30:11.023797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:30:11.023825 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:30:11.023844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:30:11.023887 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:30:11.023906 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:30:11.023923 | orchestrator | 2026-02-15 05:30:11.023939 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-02-15 05:30:11.023958 | orchestrator | Sunday 15 February 2026 05:29:55 +0000 (0:00:02.589) 0:06:25.149 ******* 2026-02-15 05:30:11.023976 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:30:11.023995 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:30:11.024013 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:30:11.024032 | orchestrator | 2026-02-15 05:30:11.024052 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-02-15 05:30:11.024065 | orchestrator | Sunday 15 February 2026 05:29:57 +0000 (0:00:02.325) 0:06:27.475 ******* 2026-02-15 05:30:11.024078 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:30:11.024090 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:30:11.024102 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:30:11.024115 | orchestrator | 2026-02-15 05:30:11.024128 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-02-15 05:30:11.024140 | orchestrator | Sunday 15 February 2026 05:30:00 +0000 (0:00:02.951) 0:06:30.426 ******* 2026-02-15 05:30:11.024153 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:30:11.024165 | orchestrator | 2026-02-15 05:30:11.024176 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-02-15 05:30:11.024187 | orchestrator | Sunday 15 February 2026 05:30:03 +0000 (0:00:03.058) 0:06:33.485 ******* 2026-02-15 05:30:11.024197 | orchestrator | included: 
/ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-02-15 05:30:11.024209 | orchestrator | 2026-02-15 05:30:11.024220 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-02-15 05:30:11.024231 | orchestrator | Sunday 15 February 2026 05:30:05 +0000 (0:00:01.709) 0:06:35.195 ******* 2026-02-15 05:30:11.024243 | orchestrator | ok: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-15 05:30:11.024258 | orchestrator | ok: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-15 05:30:11.024281 | orchestrator | ok: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 
'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-02-15 05:30:30.626882 | orchestrator | 2026-02-15 05:30:30.627034 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-02-15 05:30:30.627098 | orchestrator | Sunday 15 February 2026 05:30:11 +0000 (0:00:05.691) 0:06:40.887 ******* 2026-02-15 05:30:30.627123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-15 05:30:30.627147 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:30:30.627189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-15 05:30:30.627209 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:30:30.627229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-15 05:30:30.627247 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:30:30.627266 | orchestrator | 2026-02-15 05:30:30.627285 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-02-15 05:30:30.627303 | orchestrator | Sunday 15 February 2026 05:30:13 +0000 (0:00:02.517) 0:06:43.405 ******* 2026-02-15 05:30:30.627323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-15 05:30:30.627345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-15 05:30:30.627365 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:30:30.627383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-15 05:30:30.627401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-15 05:30:30.627420 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:30:30.627439 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout 
tunnel 1h']}})  2026-02-15 05:30:30.627457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-02-15 05:30:30.627476 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:30:30.627507 | orchestrator | 2026-02-15 05:30:30.627556 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-15 05:30:30.627577 | orchestrator | Sunday 15 February 2026 05:30:16 +0000 (0:00:02.495) 0:06:45.900 ******* 2026-02-15 05:30:30.627596 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:30:30.627615 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:30:30.627633 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:30:30.627652 | orchestrator | 2026-02-15 05:30:30.627669 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-15 05:30:30.627688 | orchestrator | Sunday 15 February 2026 05:30:19 +0000 (0:00:03.760) 0:06:49.661 ******* 2026-02-15 05:30:30.627706 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:30:30.627724 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:30:30.627768 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:30:30.627788 | orchestrator | 2026-02-15 05:30:30.627806 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-02-15 05:30:30.627824 | orchestrator | Sunday 15 February 2026 05:30:23 +0000 (0:00:03.981) 0:06:53.642 ******* 2026-02-15 05:30:30.627844 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-02-15 05:30:30.627864 | orchestrator | 2026-02-15 05:30:30.627883 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy 
config] *** 2026-02-15 05:30:30.627901 | orchestrator | Sunday 15 February 2026 05:30:25 +0000 (0:00:01.762) 0:06:55.404 ******* 2026-02-15 05:30:30.627922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-15 05:30:30.627952 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:30:30.627972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-15 05:30:30.627991 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:30:30.628010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-15 05:30:30.628029 | 
orchestrator | skipping: [testbed-node-2] 2026-02-15 05:30:30.628047 | orchestrator | 2026-02-15 05:30:30.628064 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-02-15 05:30:30.628081 | orchestrator | Sunday 15 February 2026 05:30:28 +0000 (0:00:02.497) 0:06:57.902 ******* 2026-02-15 05:30:30.628100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-15 05:30:30.628132 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:30:30.628151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-15 05:30:30.628169 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:30:30.628199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-02-15 05:31:05.536382 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:05.536562 | orchestrator | 2026-02-15 05:31:05.536583 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-02-15 05:31:05.536596 | orchestrator | Sunday 15 February 2026 05:30:30 +0000 (0:00:02.591) 0:07:00.493 ******* 2026-02-15 05:31:05.536609 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:05.536620 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:05.536630 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:05.536641 | orchestrator | 2026-02-15 05:31:05.536652 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-15 05:31:05.536663 | orchestrator | Sunday 15 February 2026 05:30:33 +0000 (0:00:02.408) 0:07:02.902 ******* 2026-02-15 05:31:05.536674 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:31:05.536685 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:31:05.536696 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:31:05.536706 | orchestrator | 2026-02-15 05:31:05.536717 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-15 05:31:05.536728 | orchestrator | Sunday 15 February 2026 05:30:36 +0000 (0:00:03.735) 0:07:06.637 ******* 2026-02-15 05:31:05.536739 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:31:05.536749 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:31:05.536760 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:31:05.536770 | orchestrator | 2026-02-15 05:31:05.536781 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-02-15 05:31:05.536792 | orchestrator | Sunday 15 February 2026 05:30:40 
+0000 (0:00:04.243) 0:07:10.881 ******* 2026-02-15 05:31:05.536818 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-02-15 05:31:05.536831 | orchestrator | 2026-02-15 05:31:05.536842 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-02-15 05:31:05.536853 | orchestrator | Sunday 15 February 2026 05:30:43 +0000 (0:00:02.624) 0:07:13.506 ******* 2026-02-15 05:31:05.536866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 05:31:05.536903 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:05.536916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 05:31:05.536931 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:05.536944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 05:31:05.536957 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:05.536970 | orchestrator | 2026-02-15 05:31:05.536983 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-02-15 05:31:05.536996 | orchestrator | Sunday 15 February 2026 05:30:46 +0000 (0:00:02.649) 0:07:16.155 ******* 2026-02-15 05:31:05.537009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 05:31:05.537022 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:05.537052 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 05:31:05.537067 | orchestrator | 
skipping: [testbed-node-1] 2026-02-15 05:31:05.537080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-02-15 05:31:05.537093 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:05.537105 | orchestrator | 2026-02-15 05:31:05.537117 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-02-15 05:31:05.537130 | orchestrator | Sunday 15 February 2026 05:30:48 +0000 (0:00:02.474) 0:07:18.630 ******* 2026-02-15 05:31:05.537143 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:05.537161 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:05.537178 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:05.537192 | orchestrator | 2026-02-15 05:31:05.537204 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-02-15 05:31:05.537217 | orchestrator | Sunday 15 February 2026 05:30:51 +0000 (0:00:02.483) 0:07:21.114 ******* 2026-02-15 05:31:05.537239 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:31:05.537251 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:31:05.537264 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:31:05.537276 | orchestrator | 2026-02-15 05:31:05.537287 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-02-15 05:31:05.537298 | orchestrator | Sunday 15 February 2026 05:30:54 +0000 (0:00:03.475) 0:07:24.589 ******* 2026-02-15 05:31:05.537309 | orchestrator | ok: [testbed-node-0] 
2026-02-15 05:31:05.537319 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:31:05.537330 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:31:05.537341 | orchestrator | 2026-02-15 05:31:05.537351 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-02-15 05:31:05.537362 | orchestrator | Sunday 15 February 2026 05:30:59 +0000 (0:00:04.525) 0:07:29.115 ******* 2026-02-15 05:31:05.537373 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:31:05.537384 | orchestrator | 2026-02-15 05:31:05.537395 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-02-15 05:31:05.537406 | orchestrator | Sunday 15 February 2026 05:31:01 +0000 (0:00:02.480) 0:07:31.595 ******* 2026-02-15 05:31:05.537418 | orchestrator | ok: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 05:31:05.537436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 05:31:05.537464 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 05:31:06.666474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 05:31:06.666658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:31:06.666677 | orchestrator | ok: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 05:31:06.666691 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 05:31:06.666703 | orchestrator | ok: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-02-15 05:31:06.666732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 05:31:06.666746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 05:31:06.666768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 05:31:06.666780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 05:31:06.666791 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:31:06.666804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 05:31:06.666815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:31:06.666827 | orchestrator | 2026-02-15 05:31:06.666847 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-02-15 05:31:07.650855 
| orchestrator | Sunday 15 February 2026 05:31:06 +0000 (0:00:04.945) 0:07:36.540 ******* 2026-02-15 05:31:07.651011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 05:31:07.651036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 05:31:07.651049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 05:31:07.651060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 05:31:07.651072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 05:31:07.651101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 05:31:07.651121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:31:07.651131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': 
'30'}}})  2026-02-15 05:31:07.651142 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:07.651153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 05:31:07.651163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:31:07.651173 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:07.651252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-api:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-02-15 05:31:07.651286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-driver-agent:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-02-15 05:31:26.094758 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-health-manager:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-02-15 05:31:26.094879 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-housekeeping:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-02-15 05:31:26.094899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/octavia-worker:16.0.1.20251208', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/dev/shm:/dev/shm', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-02-15 05:31:26.094912 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:26.094925 | orchestrator | 2026-02-15 05:31:26.094938 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-02-15 05:31:26.094950 | orchestrator | Sunday 15 February 2026 05:31:08 +0000 (0:00:02.152) 0:07:38.693 ******* 2026-02-15 05:31:26.094962 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 05:31:26.094976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 05:31:26.094989 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:26.095000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 05:31:26.095011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 05:31:26.095046 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:26.095058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 05:31:26.095069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-02-15 05:31:26.095080 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:26.095090 | orchestrator | 2026-02-15 05:31:26.095102 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-02-15 05:31:26.095112 | orchestrator | Sunday 15 February 2026 05:31:11 +0000 (0:00:02.205) 0:07:40.899 ******* 2026-02-15 05:31:26.095123 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:31:26.095134 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:31:26.095145 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:31:26.095156 | orchestrator | 2026-02-15 05:31:26.095166 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-02-15 05:31:26.095177 | orchestrator | Sunday 15 February 2026 
05:31:13 +0000 (0:00:02.278) 0:07:43.178 ******* 2026-02-15 05:31:26.095188 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:31:26.095198 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:31:26.095226 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:31:26.095238 | orchestrator | 2026-02-15 05:31:26.095249 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-02-15 05:31:26.095259 | orchestrator | Sunday 15 February 2026 05:31:16 +0000 (0:00:03.025) 0:07:46.203 ******* 2026-02-15 05:31:26.095273 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:31:26.095286 | orchestrator | 2026-02-15 05:31:26.095299 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-02-15 05:31:26.095311 | orchestrator | Sunday 15 February 2026 05:31:19 +0000 (0:00:02.687) 0:07:48.891 ******* 2026-02-15 05:31:26.095332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:31:26.095351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 
'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:31:26.095373 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:31:26.095396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:31:28.359642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:31:28.359734 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:31:28.359762 | orchestrator | 2026-02-15 05:31:28.359773 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-02-15 05:31:28.359782 | orchestrator | Sunday 15 February 2026 05:31:26 +0000 (0:00:07.071) 0:07:55.963 ******* 2026-02-15 05:31:28.359791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 
'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:31:28.359820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:31:28.359829 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:28.359839 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:31:28.359847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:31:28.359861 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:28.359869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:31:28.359888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:31:39.212635 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:39.212783 | orchestrator | 2026-02-15 05:31:39.212807 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-02-15 05:31:39.212825 | orchestrator | Sunday 15 February 2026 05:31:28 +0000 (0:00:02.267) 0:07:58.230 ******* 2026-02-15 05:31:39.212845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:31:39.212866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-15 05:31:39.212909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-15 05:31:39.212923 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:39.212933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 
'backend_http_extra': ['option httpchk']}})  2026-02-15 05:31:39.212943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-15 05:31:39.212953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-15 05:31:39.212963 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:39.212972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:31:39.212983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-15 05:31:39.212993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}})  2026-02-15 05:31:39.213003 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:39.213013 | orchestrator | 2026-02-15 05:31:39.213023 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 
2026-02-15 05:31:39.213032 | orchestrator | Sunday 15 February 2026 05:31:30 +0000 (0:00:01.818) 0:08:00.048 ******* 2026-02-15 05:31:39.213042 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:39.213052 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:39.213061 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:39.213071 | orchestrator | 2026-02-15 05:31:39.213080 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-02-15 05:31:39.213090 | orchestrator | Sunday 15 February 2026 05:31:31 +0000 (0:00:01.479) 0:08:01.528 ******* 2026-02-15 05:31:39.213099 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:39.213109 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:39.213118 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:39.213127 | orchestrator | 2026-02-15 05:31:39.213139 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-02-15 05:31:39.213165 | orchestrator | Sunday 15 February 2026 05:31:33 +0000 (0:00:02.293) 0:08:03.821 ******* 2026-02-15 05:31:39.213177 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:31:39.213188 | orchestrator | 2026-02-15 05:31:39.213199 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-02-15 05:31:39.213210 | orchestrator | Sunday 15 February 2026 05:31:36 +0000 (0:00:02.589) 0:08:06.410 ******* 2026-02-15 05:31:39.213244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-15 05:31:39.213268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 05:31:39.213281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:39.213294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:39.213306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 05:31:39.213330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 
'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-15 05:31:41.003620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 05:31:41.003721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:41.003738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:41.003752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 05:31:41.003767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}}) 2026-02-15 05:31:41.003797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 05:31:41.003850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:41.003864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:41.003876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 05:31:41.003888 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:31:41.003901 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 
45s']}}}})  2026-02-15 05:31:41.003927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:41.003946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:43.608546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:31:43.608655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 05:31:43.608673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-15 05:31:43.608687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:43.608739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:43.608753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 05:31:43.608784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:31:43.608797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-15 05:31:43.608809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:43.608820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:43.608847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 05:31:43.608860 | orchestrator | 2026-02-15 05:31:43.608874 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-02-15 05:31:43.608886 | orchestrator | Sunday 15 February 2026 05:31:42 +0000 (0:00:05.975) 0:08:12.386 ******* 2026-02-15 05:31:43.608921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-15 05:31:43.751157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 05:31:43.751248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:43.751262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:43.751274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 05:31:43.751322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:31:43.751359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-15 05:31:43.751377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:43.751392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:43.751408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 05:31:43.751434 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:43.751451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-15 
05:31:43.751462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 05:31:43.751471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:43.751489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:45.007133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 05:31:45.007245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:31:45.007303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-15 05:31:45.007319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-server:3.2.1.20251208', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_server:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}}}})  2026-02-15 05:31:45.007349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:45.007362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-node-exporter:1.8.2.20251208', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-02-15 05:31:45.007374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:45.007394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-mysqld-exporter:0.16.0.20251208', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:45.007412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 05:31:45.007423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-memcached-exporter:0.15.0.20251208', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:45.007435 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:45.007449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-cadvisor:0.49.2.20251208', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-02-15 05:31:45.007470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-alertmanager:0.28.1.20251208', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:31:57.850860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-openstack-exporter:1.7.0.20251208', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['option httpchk', 'timeout server 45s']}}}})  2026-02-15 05:31:57.850983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-elasticsearch-exporter:1.8.0.20251208', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:57.851013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-blackbox-exporter:0.25.0.20251208', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:31:57.851030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/prometheus-libvirt-exporter:2.2.0.20251208', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-02-15 05:31:57.851045 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:57.851061 | orchestrator | 2026-02-15 05:31:57.851076 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-02-15 05:31:57.851092 | orchestrator | Sunday 15 February 2026 05:31:45 +0000 (0:00:02.498) 0:08:14.885 ******* 2026-02-15 05:31:57.851109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 
'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-15 05:31:57.851126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-15 05:31:57.851142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:31:57.851174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:31:57.851202 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:57.851217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-15 05:31:57.851232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-15 05:31:57.851247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:31:57.851262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-15 05:31:57.851283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True, 'backend_http_extra': ['option httpchk GET /-/ready HTTP/1.0', "http-check send hdr Authorization 'Basic aGFwcm94eTptdWVNaWV4aWUzYW5nb28wZnVjaGFod2VlUXVhaEpvbw=='"]}})  2026-02-15 05:31:57.851297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:31:57.851311 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:57.851326 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:31:57.851339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True, 'backend_http_extra': ['option httpchk']}})  2026-02-15 05:31:57.851353 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:57.851367 | orchestrator | 2026-02-15 05:31:57.851382 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-02-15 05:31:57.851396 | orchestrator | Sunday 15 February 2026 05:31:47 +0000 (0:00:02.002) 0:08:16.887 ******* 2026-02-15 05:31:57.851410 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:57.851427 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:57.851443 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:57.851456 | orchestrator | 2026-02-15 05:31:57.851466 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2026-02-15 05:31:57.851475 | orchestrator | Sunday 15 February 2026 05:31:49 +0000 (0:00:02.034) 0:08:18.922 ******* 2026-02-15 05:31:57.851525 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:31:57.851534 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:31:57.851542 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:31:57.851550 | orchestrator | 2026-02-15 05:31:57.851558 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2026-02-15 05:31:57.851566 | orchestrator | Sunday 
15 February 2026 05:31:51 +0000 (0:00:02.312) 0:08:21.235 ******* 2026-02-15 05:31:57.851574 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:31:57.851581 | orchestrator | 2026-02-15 05:31:57.851589 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2026-02-15 05:31:57.851597 | orchestrator | Sunday 15 February 2026 05:31:53 +0000 (0:00:02.371) 0:08:23.606 ******* 2026-02-15 05:31:57.851616 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:32:15.829218 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:32:15.829369 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:32:15.829389 | orchestrator | 2026-02-15 05:32:15.829403 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2026-02-15 05:32:15.829415 | orchestrator | Sunday 15 February 2026 05:31:57 +0000 (0:00:04.111) 0:08:27.717 ******* 2026-02-15 05:32:15.829449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 05:32:15.829462 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:32:15.829590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 
05:32:15.829618 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:32:15.829649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 05:32:15.829671 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:32:15.829690 | orchestrator | 2026-02-15 05:32:15.829705 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2026-02-15 05:32:15.829717 | orchestrator | Sunday 15 February 2026 05:31:59 +0000 (0:00:01.554) 0:08:29.272 ******* 2026-02-15 05:32:15.829731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-15 05:32:15.829745 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:32:15.829757 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-15 05:32:15.829770 | orchestrator | skipping: [testbed-node-1] 
2026-02-15 05:32:15.829793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2026-02-15 05:32:15.829805 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:32:15.829818 | orchestrator | 2026-02-15 05:32:15.829831 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2026-02-15 05:32:15.829843 | orchestrator | Sunday 15 February 2026 05:32:00 +0000 (0:00:01.470) 0:08:30.742 ******* 2026-02-15 05:32:15.829856 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:32:15.829868 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:32:15.829880 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:32:15.829893 | orchestrator | 2026-02-15 05:32:15.829905 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2026-02-15 05:32:15.829917 | orchestrator | Sunday 15 February 2026 05:32:02 +0000 (0:00:01.882) 0:08:32.624 ******* 2026-02-15 05:32:15.829929 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:32:15.829942 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:32:15.829955 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:32:15.829967 | orchestrator | 2026-02-15 05:32:15.829979 | orchestrator | TASK [include_role : skyline] ************************************************** 2026-02-15 05:32:15.829991 | orchestrator | Sunday 15 February 2026 05:32:05 +0000 (0:00:02.313) 0:08:34.937 ******* 2026-02-15 05:32:15.830003 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:32:15.830015 | orchestrator | 2026-02-15 05:32:15.830091 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2026-02-15 05:32:15.830103 | orchestrator | Sunday 15 February 2026 05:32:07 +0000 (0:00:02.417) 0:08:37.355 ******* 2026-02-15 05:32:15.830122 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-15 05:32:15.830177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-15 05:32:17.666386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}}) 2026-02-15 05:32:17.666563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-15 05:32:17.666583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-15 05:32:17.666632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 
'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}}) 2026-02-15 05:32:17.666655 | orchestrator | 2026-02-15 05:32:17.666669 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2026-02-15 05:32:17.666682 | orchestrator | Sunday 15 February 2026 05:32:15 +0000 (0:00:08.344) 0:08:45.699 ******* 2026-02-15 05:32:17.666694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-15 05:32:17.666707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 
'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-15 05:32:17.666719 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:32:17.666731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-15 05:32:17.666758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-15 05:32:39.298818 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:32:39.298964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-apiserver:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': 
'9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}}}})  2026-02-15 05:32:39.298998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/skyline-console:6.0.1.20251208', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}}}})  2026-02-15 05:32:39.299016 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:32:39.299028 | orchestrator | 2026-02-15 05:32:39.299040 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2026-02-15 05:32:39.299053 | orchestrator | Sunday 15 February 2026 05:32:17 +0000 (0:00:01.840) 0:08:47.539 ******* 2026-02-15 05:32:39.299066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 
'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-15 05:32:39.299081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-15 05:32:39.299093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 05:32:39.299106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 05:32:39.299137 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:32:39.299148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-15 05:32:39.299160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-15 05:32:39.299189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 
05:32:39.299201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 05:32:39.299212 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:32:39.299223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-15 05:32:39.299234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /docs']}})  2026-02-15 05:32:39.299245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 05:32:39.299318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no', 'backend_http_extra': ['option httpchk GET /']}})  2026-02-15 05:32:39.299337 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:32:39.299348 | orchestrator | 2026-02-15 05:32:39.299359 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2026-02-15 05:32:39.299370 | orchestrator | Sunday 15 February 2026 05:32:19 +0000 (0:00:02.170) 0:08:49.709 ******* 2026-02-15 
05:32:39.299381 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:32:39.299412 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:32:39.299423 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:32:39.299445 | orchestrator | 2026-02-15 05:32:39.299456 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2026-02-15 05:32:39.299467 | orchestrator | Sunday 15 February 2026 05:32:22 +0000 (0:00:02.219) 0:08:51.929 ******* 2026-02-15 05:32:39.299520 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:32:39.299534 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:32:39.299545 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:32:39.299556 | orchestrator | 2026-02-15 05:32:39.299567 | orchestrator | TASK [include_role : tacker] *************************************************** 2026-02-15 05:32:39.299577 | orchestrator | Sunday 15 February 2026 05:32:25 +0000 (0:00:02.962) 0:08:54.892 ******* 2026-02-15 05:32:39.299588 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:32:39.299608 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:32:39.299619 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:32:39.299630 | orchestrator | 2026-02-15 05:32:39.299641 | orchestrator | TASK [include_role : trove] **************************************************** 2026-02-15 05:32:39.299652 | orchestrator | Sunday 15 February 2026 05:32:26 +0000 (0:00:01.462) 0:08:56.355 ******* 2026-02-15 05:32:39.299663 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:32:39.299673 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:32:39.299684 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:32:39.299694 | orchestrator | 2026-02-15 05:32:39.299705 | orchestrator | TASK [include_role : venus] **************************************************** 2026-02-15 05:32:39.299716 | orchestrator | Sunday 15 February 2026 05:32:27 +0000 (0:00:01.435) 0:08:57.790 ******* 2026-02-15 05:32:39.299726 | orchestrator | 
skipping: [testbed-node-0] 2026-02-15 05:32:39.299737 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:32:39.299748 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:32:39.299758 | orchestrator | 2026-02-15 05:32:39.299769 | orchestrator | TASK [include_role : watcher] ************************************************** 2026-02-15 05:32:39.299779 | orchestrator | Sunday 15 February 2026 05:32:29 +0000 (0:00:01.755) 0:08:59.545 ******* 2026-02-15 05:32:39.299790 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:32:39.299800 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:32:39.299811 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:32:39.299821 | orchestrator | 2026-02-15 05:32:39.299832 | orchestrator | TASK [include_role : zun] ****************************************************** 2026-02-15 05:32:39.299843 | orchestrator | Sunday 15 February 2026 05:32:31 +0000 (0:00:01.373) 0:09:00.920 ******* 2026-02-15 05:32:39.299853 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:32:39.299864 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:32:39.299874 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:32:39.299885 | orchestrator | 2026-02-15 05:32:39.299900 | orchestrator | TASK [include_role : loadbalancer] ********************************************* 2026-02-15 05:32:39.299912 | orchestrator | Sunday 15 February 2026 05:32:32 +0000 (0:00:01.329) 0:09:02.249 ******* 2026-02-15 05:32:39.299922 | orchestrator | included: loadbalancer for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:32:39.299933 | orchestrator | 2026-02-15 05:32:39.299944 | orchestrator | TASK [service-check-containers : loadbalancer | Check containers] ************** 2026-02-15 05:32:39.299954 | orchestrator | Sunday 15 February 2026 05:32:35 +0000 (0:00:02.811) 0:09:05.061 ******* 2026-02-15 05:32:39.299977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-02-15 05:32:43.845148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-02-15 05:32:43.845254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-02-15 05:32:43.845293 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:32:43.845306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-02-15 05:32:43.845332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}}) 2026-02-15 05:32:43.845345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:32:43.845376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:32:43.845389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-02-15 05:32:43.845409 | orchestrator | 2026-02-15 05:32:43.845422 | orchestrator | TASK [service-check-containers : loadbalancer | Notify handlers to restart containers] *** 2026-02-15 05:32:43.845434 | orchestrator | Sunday 15 February 2026 05:32:39 +0000 (0:00:04.106) 0:09:09.167 ******* 
2026-02-15 05:32:43.845445 | orchestrator | changed: [testbed-node-0] => { 2026-02-15 05:32:43.845457 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:32:43.845469 | orchestrator | } 2026-02-15 05:32:43.845546 | orchestrator | changed: [testbed-node-1] => { 2026-02-15 05:32:43.845559 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:32:43.845570 | orchestrator | } 2026-02-15 05:32:43.845581 | orchestrator | changed: [testbed-node-2] => { 2026-02-15 05:32:43.845591 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:32:43.845602 | orchestrator | } 2026-02-15 05:32:43.845613 | orchestrator | 2026-02-15 05:32:43.845624 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-15 05:32:43.845635 | orchestrator | Sunday 15 February 2026 05:32:40 +0000 (0:00:01.463) 0:09:10.631 ******* 2026-02-15 05:32:43.845646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-02-15 05:32:43.845658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:32:43.845675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:32:43.845689 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:32:43.845702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-02-15 05:32:43.845724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:34:44.267638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:34:44.267753 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:34:44.267769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/haproxy:2.8.15.20251208', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-02-15 05:34:44.267783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/proxysql:3.0.3.20251208', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-02-15 05:34:44.267794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/keepalived:2.2.8.20251208', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-02-15 05:34:44.267804 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:34:44.267814 | orchestrator | 2026-02-15 05:34:44.267825 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-02-15 05:34:44.267851 | orchestrator | Sunday 15 February 2026 05:32:43 +0000 (0:00:03.082) 0:09:13.714 ******* 2026-02-15 05:34:44.267861 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:34:44.267872 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:34:44.267882 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:34:44.267891 | orchestrator | 2026-02-15 05:34:44.267902 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-02-15 05:34:44.267912 | orchestrator | Sunday 15 February 2026 05:32:45 +0000 (0:00:01.858) 0:09:15.572 ******* 2026-02-15 05:34:44.267922 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:34:44.267931 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:34:44.267941 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:34:44.267951 | orchestrator | 2026-02-15 
05:34:44.267961 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-02-15 05:34:44.267992 | orchestrator | Sunday 15 February 2026 05:32:47 +0000 (0:00:01.511) 0:09:17.083 ******* 2026-02-15 05:34:44.268002 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:34:44.268013 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:34:44.268022 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:34:44.268032 | orchestrator | 2026-02-15 05:34:44.268042 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-02-15 05:34:44.268051 | orchestrator | Sunday 15 February 2026 05:32:54 +0000 (0:00:07.122) 0:09:24.206 ******* 2026-02-15 05:34:44.268061 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:34:44.268071 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:34:44.268080 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:34:44.268090 | orchestrator | 2026-02-15 05:34:44.268100 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-02-15 05:34:44.268109 | orchestrator | Sunday 15 February 2026 05:33:01 +0000 (0:00:07.470) 0:09:31.676 ******* 2026-02-15 05:34:44.268119 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:34:44.268129 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:34:44.268140 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:34:44.268151 | orchestrator | 2026-02-15 05:34:44.268162 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-02-15 05:34:44.268174 | orchestrator | Sunday 15 February 2026 05:33:08 +0000 (0:00:07.078) 0:09:38.755 ******* 2026-02-15 05:34:44.268185 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:34:44.268197 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:34:44.268208 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:34:44.268219 | orchestrator | 2026-02-15 
05:34:44.268245 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-02-15 05:34:44.268257 | orchestrator | Sunday 15 February 2026 05:33:16 +0000 (0:00:07.828) 0:09:46.584 ******* 2026-02-15 05:34:44.268268 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:34:44.268279 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:34:44.268289 | orchestrator | 2026-02-15 05:34:44.268300 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-02-15 05:34:44.268311 | orchestrator | Sunday 15 February 2026 05:33:20 +0000 (0:00:03.704) 0:09:50.289 ******* 2026-02-15 05:34:44.268323 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:34:44.268334 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:34:44.268344 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:34:44.268355 | orchestrator | 2026-02-15 05:34:44.268367 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-02-15 05:34:44.268378 | orchestrator | Sunday 15 February 2026 05:33:33 +0000 (0:00:13.362) 0:10:03.651 ******* 2026-02-15 05:34:44.268390 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:34:44.268401 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:34:44.268411 | orchestrator | 2026-02-15 05:34:44.268423 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-02-15 05:34:44.268434 | orchestrator | Sunday 15 February 2026 05:33:37 +0000 (0:00:03.774) 0:10:07.426 ******* 2026-02-15 05:34:44.268445 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:34:44.268457 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:34:44.268467 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:34:44.268478 | orchestrator | 2026-02-15 05:34:44.268490 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-02-15 05:34:44.268501 | orchestrator | Sunday 15 
February 2026 05:33:44 +0000 (0:00:07.329) 0:10:14.756 ******* 2026-02-15 05:34:44.268533 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:34:44.268544 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:34:44.268554 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:34:44.268563 | orchestrator | 2026-02-15 05:34:44.268573 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-02-15 05:34:44.268582 | orchestrator | Sunday 15 February 2026 05:33:51 +0000 (0:00:06.863) 0:10:21.619 ******* 2026-02-15 05:34:44.268592 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:34:44.268609 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:34:44.268619 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:34:44.268628 | orchestrator | 2026-02-15 05:34:44.268637 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-02-15 05:34:44.268647 | orchestrator | Sunday 15 February 2026 05:33:58 +0000 (0:00:06.808) 0:10:28.428 ******* 2026-02-15 05:34:44.268657 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:34:44.268666 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:34:44.268676 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:34:44.268685 | orchestrator | 2026-02-15 05:34:44.268695 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-02-15 05:34:44.268704 | orchestrator | Sunday 15 February 2026 05:34:05 +0000 (0:00:06.928) 0:10:35.356 ******* 2026-02-15 05:34:44.268714 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:34:44.268724 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:34:44.268733 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:34:44.268743 | orchestrator | 2026-02-15 05:34:44.268752 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master haproxy to start] ************** 2026-02-15 05:34:44.268762 | orchestrator | Sunday 15 
February 2026 05:34:12 +0000 (0:00:07.313) 0:10:42.670 ******* 2026-02-15 05:34:44.268771 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:34:44.268781 | orchestrator | 2026-02-15 05:34:44.268790 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-02-15 05:34:44.268800 | orchestrator | Sunday 15 February 2026 05:34:16 +0000 (0:00:03.633) 0:10:46.303 ******* 2026-02-15 05:34:44.268809 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:34:44.268819 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:34:44.268828 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:34:44.268838 | orchestrator | 2026-02-15 05:34:44.268852 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for master proxysql to start] ************* 2026-02-15 05:34:44.268862 | orchestrator | Sunday 15 February 2026 05:34:28 +0000 (0:00:12.358) 0:10:58.662 ******* 2026-02-15 05:34:44.268872 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:34:44.268881 | orchestrator | 2026-02-15 05:34:44.268890 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-02-15 05:34:44.268900 | orchestrator | Sunday 15 February 2026 05:34:32 +0000 (0:00:03.691) 0:11:02.353 ******* 2026-02-15 05:34:44.268909 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:34:44.268919 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:34:44.268929 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:34:44.268938 | orchestrator | 2026-02-15 05:34:44.268948 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-02-15 05:34:44.268957 | orchestrator | Sunday 15 February 2026 05:34:39 +0000 (0:00:06.967) 0:11:09.321 ******* 2026-02-15 05:34:44.268967 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:34:44.268976 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:34:44.268986 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:34:44.268995 | orchestrator | 
2026-02-15 05:34:44.269005 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-02-15 05:34:44.269014 | orchestrator | Sunday 15 February 2026 05:34:41 +0000 (0:00:01.982) 0:11:11.303 ******* 2026-02-15 05:34:44.269024 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:34:44.269033 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:34:44.269043 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:34:44.269052 | orchestrator | 2026-02-15 05:34:44.269061 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:34:44.269072 | orchestrator | testbed-node-0 : ok=129  changed=29  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-15 05:34:44.269082 | orchestrator | testbed-node-1 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-15 05:34:44.269098 | orchestrator | testbed-node-2 : ok=128  changed=28  unreachable=0 failed=0 skipped=94  rescued=0 ignored=0 2026-02-15 05:34:45.216750 | orchestrator | 2026-02-15 05:34:45.216851 | orchestrator | 2026-02-15 05:34:45.216867 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:34:45.216880 | orchestrator | Sunday 15 February 2026 05:34:44 +0000 (0:00:02.826) 0:11:14.129 ******* 2026-02-15 05:34:45.216891 | orchestrator | =============================================================================== 2026-02-15 05:34:45.216902 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 13.36s 2026-02-15 05:34:45.216913 | orchestrator | loadbalancer : Start master proxysql container ------------------------- 12.36s 2026-02-15 05:34:45.216924 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 8.34s 2026-02-15 05:34:45.216934 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 7.83s 2026-02-15 05:34:45.216945 | 
orchestrator | loadbalancer : Stop backup haproxy container ---------------------------- 7.47s 2026-02-15 05:34:45.216955 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 7.33s 2026-02-15 05:34:45.216966 | orchestrator | loadbalancer : Start master haproxy container --------------------------- 7.31s 2026-02-15 05:34:45.216977 | orchestrator | loadbalancer : Stop backup keepalived container ------------------------- 7.12s 2026-02-15 05:34:45.216988 | orchestrator | loadbalancer : Stop backup proxysql container --------------------------- 7.08s 2026-02-15 05:34:45.216998 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 7.07s 2026-02-15 05:34:45.217010 | orchestrator | loadbalancer : Start master keepalived container ------------------------ 6.97s 2026-02-15 05:34:45.217021 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 6.96s 2026-02-15 05:34:45.217031 | orchestrator | loadbalancer : Stop master keepalived container ------------------------- 6.93s 2026-02-15 05:34:45.217042 | orchestrator | loadbalancer : Stop master haproxy container ---------------------------- 6.86s 2026-02-15 05:34:45.217053 | orchestrator | loadbalancer : Stop master proxysql container --------------------------- 6.81s 2026-02-15 05:34:45.217063 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 6.40s 2026-02-15 05:34:45.217074 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.98s 2026-02-15 05:34:45.217085 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.81s 2026-02-15 05:34:45.217095 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 5.72s 2026-02-15 05:34:45.217106 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.69s 2026-02-15 05:34:45.540385 | orchestrator | 
+ osism apply -a upgrade opensearch 2026-02-15 05:34:47.624184 | orchestrator | 2026-02-15 05:34:47 | INFO  | Task e2295630-440d-4000-b2c9-70bbff17aea0 (opensearch) was prepared for execution. 2026-02-15 05:34:47.624297 | orchestrator | 2026-02-15 05:34:47 | INFO  | It takes a moment until task e2295630-440d-4000-b2c9-70bbff17aea0 (opensearch) has been started and output is visible here. 2026-02-15 05:35:07.018584 | orchestrator | 2026-02-15 05:35:07.018701 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 05:35:07.018709 | orchestrator | 2026-02-15 05:35:07.018714 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 05:35:07.018720 | orchestrator | Sunday 15 February 2026 05:34:53 +0000 (0:00:01.476) 0:00:01.476 ******* 2026-02-15 05:35:07.018725 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:35:07.018730 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:35:07.018734 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:35:07.018739 | orchestrator | 2026-02-15 05:35:07.018754 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 05:35:07.018759 | orchestrator | Sunday 15 February 2026 05:34:55 +0000 (0:00:02.266) 0:00:03.743 ******* 2026-02-15 05:35:07.018764 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-02-15 05:35:07.018769 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-02-15 05:35:07.018774 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-02-15 05:35:07.018791 | orchestrator | 2026-02-15 05:35:07.018795 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-02-15 05:35:07.018800 | orchestrator | 2026-02-15 05:35:07.018804 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-15 05:35:07.018808 | 
orchestrator | Sunday 15 February 2026 05:34:57 +0000 (0:00:02.131) 0:00:05.874 ******* 2026-02-15 05:35:07.018813 | orchestrator | included: /ansible/roles/opensearch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:35:07.018818 | orchestrator | 2026-02-15 05:35:07.018822 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-02-15 05:35:07.018827 | orchestrator | Sunday 15 February 2026 05:35:00 +0000 (0:00:02.723) 0:00:08.597 ******* 2026-02-15 05:35:07.018831 | orchestrator | ok: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-15 05:35:07.018836 | orchestrator | ok: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-15 05:35:07.018840 | orchestrator | ok: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-02-15 05:35:07.018844 | orchestrator | 2026-02-15 05:35:07.018848 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-02-15 05:35:07.018853 | orchestrator | Sunday 15 February 2026 05:35:02 +0000 (0:00:02.116) 0:00:10.714 ******* 2026-02-15 05:35:07.018859 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:07.018867 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:07.018882 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:07.018896 | orchestrator | ok: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:07.018902 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:07.018908 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:07.018912 | orchestrator | 2026-02-15 05:35:07.018917 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-15 05:35:07.018921 | orchestrator | Sunday 15 February 2026 05:35:05 +0000 (0:00:02.495) 0:00:13.209 ******* 2026-02-15 05:35:07.018926 | orchestrator | included: 
/ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:35:07.018934 | orchestrator | 2026-02-15 05:35:07.018942 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-02-15 05:35:12.681106 | orchestrator | Sunday 15 February 2026 05:35:07 +0000 (0:00:01.744) 0:00:14.954 ******* 2026-02-15 05:35:12.681241 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:12.681262 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:12.681275 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:12.681289 | orchestrator | ok: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:12.681351 | orchestrator | ok: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:12.681366 | orchestrator | ok: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:12.681379 | orchestrator | 2026-02-15 05:35:12.681391 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-02-15 05:35:12.681403 | orchestrator | Sunday 15 February 2026 05:35:10 +0000 (0:00:03.757) 0:00:18.711 ******* 2026-02-15 05:35:12.681414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:35:12.681441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:35:14.642741 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:35:14.642868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:35:14.642890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:35:14.642905 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:35:14.642917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': 
['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:35:14.642986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:35:14.643001 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:35:14.643013 | orchestrator | 2026-02-15 05:35:14.643025 | orchestrator | TASK [service-cert-copy : opensearch | 
Copying over backend internal TLS key] *** 2026-02-15 05:35:14.643038 | orchestrator | Sunday 15 February 2026 05:35:12 +0000 (0:00:01.909) 0:00:20.621 ******* 2026-02-15 05:35:14.643049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:35:14.643061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': 
['option httpchk']}}}})  2026-02-15 05:35:14.643074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:35:14.643093 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:35:14.643119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:35:19.418837 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:35:19.418950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:35:19.418972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:35:19.419010 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:35:19.419023 | orchestrator | 2026-02-15 05:35:19.419035 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-02-15 05:35:19.419047 | orchestrator | Sunday 15 February 2026 05:35:14 +0000 (0:00:01.959) 0:00:22.580 ******* 2026-02-15 05:35:19.419058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:19.419103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:19.419117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET 
/api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:19.419130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:19.419150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:19.419176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:33.114189 | orchestrator | 2026-02-15 05:35:33.114308 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-02-15 05:35:33.114324 | orchestrator | Sunday 15 February 2026 05:35:19 +0000 (0:00:04.776) 0:00:27.357 ******* 2026-02-15 
05:35:33.114336 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:35:33.114346 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:35:33.114356 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:35:33.114366 | orchestrator | 2026-02-15 05:35:33.114376 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-02-15 05:35:33.114386 | orchestrator | Sunday 15 February 2026 05:35:22 +0000 (0:00:03.473) 0:00:30.830 ******* 2026-02-15 05:35:33.114396 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:35:33.114405 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:35:33.114415 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:35:33.114424 | orchestrator | 2026-02-15 05:35:33.114434 | orchestrator | TASK [service-check-containers : opensearch | Check containers] **************** 2026-02-15 05:35:33.114443 | orchestrator | Sunday 15 February 2026 05:35:25 +0000 (0:00:03.092) 0:00:33.923 ******* 2026-02-15 05:35:33.114456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:33.114490 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': 
{'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:33.114515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}}) 2026-02-15 05:35:33.114544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:33.114557 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:33.114578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}}) 2026-02-15 05:35:33.114589 | orchestrator | 2026-02-15 05:35:33.114599 | orchestrator | TASK [service-check-containers : opensearch | Notify handlers to restart containers] *** 2026-02-15 05:35:33.114609 | orchestrator | Sunday 15 February 2026 05:35:29 +0000 (0:00:03.519) 0:00:37.442 ******* 2026-02-15 05:35:33.114624 | orchestrator | changed: [testbed-node-0] => { 2026-02-15 05:35:33.114635 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:35:33.114645 | orchestrator | } 2026-02-15 05:35:33.114655 | orchestrator | changed: 
[testbed-node-1] => { 2026-02-15 05:35:33.114666 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:35:33.114677 | orchestrator | } 2026-02-15 05:35:33.114688 | orchestrator | changed: [testbed-node-2] => { 2026-02-15 05:35:33.114699 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:35:33.114710 | orchestrator | } 2026-02-15 05:35:33.114748 | orchestrator | 2026-02-15 05:35:33.114761 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-15 05:35:33.114772 | orchestrator | Sunday 15 February 2026 05:35:30 +0000 (0:00:01.433) 0:00:38.876 ******* 2026-02-15 05:35:33.114791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:38:39.508885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:38:39.509032 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:38:39.509061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:38:39.509103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:38:39.509126 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:38:39.509172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch:2.19.4.20251208', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal'], 'backend_http_extra': ['option httpchk']}}}})  2026-02-15 05:38:39.509212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/2025.1/opensearch-dashboards:2.19.4.20251208', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password', 'backend_http_extra': ['option httpchk GET /api/status']}}}})  2026-02-15 05:38:39.509225 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:38:39.509236 | orchestrator | 2026-02-15 05:38:39.509248 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-15 05:38:39.509260 | orchestrator | Sunday 15 February 2026 05:35:33 +0000 (0:00:02.175) 0:00:41.052 ******* 2026-02-15 05:38:39.509271 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:38:39.509282 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:38:39.509293 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:38:39.509304 | orchestrator | 2026-02-15 05:38:39.509315 | orchestrator | TASK 
[opensearch : Flush handlers] ********************************************* 2026-02-15 05:38:39.509325 | orchestrator | Sunday 15 February 2026 05:35:34 +0000 (0:00:01.536) 0:00:42.588 ******* 2026-02-15 05:38:39.509336 | orchestrator | 2026-02-15 05:38:39.509388 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-15 05:38:39.509412 | orchestrator | Sunday 15 February 2026 05:35:35 +0000 (0:00:00.445) 0:00:43.034 ******* 2026-02-15 05:38:39.509432 | orchestrator | 2026-02-15 05:38:39.509449 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-02-15 05:38:39.509463 | orchestrator | Sunday 15 February 2026 05:35:35 +0000 (0:00:00.464) 0:00:43.499 ******* 2026-02-15 05:38:39.509475 | orchestrator | 2026-02-15 05:38:39.509488 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-02-15 05:38:39.509500 | orchestrator | Sunday 15 February 2026 05:35:36 +0000 (0:00:00.789) 0:00:44.288 ******* 2026-02-15 05:38:39.509512 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:38:39.509525 | orchestrator | 2026-02-15 05:38:39.509537 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-02-15 05:38:39.509549 | orchestrator | Sunday 15 February 2026 05:35:40 +0000 (0:00:03.675) 0:00:47.964 ******* 2026-02-15 05:38:39.509561 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:38:39.509573 | orchestrator | 2026-02-15 05:38:39.509585 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-02-15 05:38:39.509598 | orchestrator | Sunday 15 February 2026 05:35:46 +0000 (0:00:06.555) 0:00:54.520 ******* 2026-02-15 05:38:39.509617 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:38:39.509630 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:38:39.509642 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:38:39.509655 | 
orchestrator | 2026-02-15 05:38:39.509668 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-02-15 05:38:39.509680 | orchestrator | Sunday 15 February 2026 05:36:55 +0000 (0:01:08.934) 0:02:03.455 ******* 2026-02-15 05:38:39.509706 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:38:39.509718 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:38:39.509729 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:38:39.509739 | orchestrator | 2026-02-15 05:38:39.509750 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-02-15 05:38:39.509761 | orchestrator | Sunday 15 February 2026 05:38:29 +0000 (0:01:34.223) 0:03:37.678 ******* 2026-02-15 05:38:39.509772 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:38:39.509783 | orchestrator | 2026-02-15 05:38:39.509794 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-02-15 05:38:39.509805 | orchestrator | Sunday 15 February 2026 05:38:31 +0000 (0:00:01.808) 0:03:39.487 ******* 2026-02-15 05:38:39.509815 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:38:39.509826 | orchestrator | 2026-02-15 05:38:39.509837 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-02-15 05:38:39.509847 | orchestrator | Sunday 15 February 2026 05:38:34 +0000 (0:00:03.377) 0:03:42.864 ******* 2026-02-15 05:38:39.509858 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:38:39.509869 | orchestrator | 2026-02-15 05:38:39.509880 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-02-15 05:38:39.509890 | orchestrator | Sunday 15 February 2026 05:38:38 +0000 (0:00:03.388) 0:03:46.253 ******* 2026-02-15 05:38:39.509901 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:38:39.509912 | 
orchestrator | 2026-02-15 05:38:39.509923 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-02-15 05:38:39.509943 | orchestrator | Sunday 15 February 2026 05:38:39 +0000 (0:00:01.187) 0:03:47.441 ******* 2026-02-15 05:38:41.883586 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:38:41.883672 | orchestrator | 2026-02-15 05:38:41.883683 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:38:41.883693 | orchestrator | testbed-node-0 : ok=19  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 05:38:41.883701 | orchestrator | testbed-node-1 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 05:38:41.883708 | orchestrator | testbed-node-2 : ok=15  changed=5  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 05:38:41.883715 | orchestrator | 2026-02-15 05:38:41.883721 | orchestrator | 2026-02-15 05:38:41.883728 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:38:41.883735 | orchestrator | Sunday 15 February 2026 05:38:41 +0000 (0:00:01.996) 0:03:49.438 ******* 2026-02-15 05:38:41.883741 | orchestrator | =============================================================================== 2026-02-15 05:38:41.883748 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 94.22s 2026-02-15 05:38:41.883754 | orchestrator | opensearch : Restart opensearch container ------------------------------ 68.94s 2026-02-15 05:38:41.883761 | orchestrator | opensearch : Perform a flush -------------------------------------------- 6.56s 2026-02-15 05:38:41.883767 | orchestrator | opensearch : Copying over config.json files for services ---------------- 4.78s 2026-02-15 05:38:41.883774 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.76s 2026-02-15 05:38:41.883780 | 
orchestrator | opensearch : Disable shard allocation ----------------------------------- 3.68s 2026-02-15 05:38:41.883786 | orchestrator | service-check-containers : opensearch | Check containers ---------------- 3.52s 2026-02-15 05:38:41.883793 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.47s 2026-02-15 05:38:41.883799 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 3.39s 2026-02-15 05:38:41.883806 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.38s 2026-02-15 05:38:41.883831 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 3.09s 2026-02-15 05:38:41.883837 | orchestrator | opensearch : include_tasks ---------------------------------------------- 2.72s 2026-02-15 05:38:41.883844 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.50s 2026-02-15 05:38:41.883850 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.27s 2026-02-15 05:38:41.883857 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.18s 2026-02-15 05:38:41.883863 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.13s 2026-02-15 05:38:41.883870 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.12s 2026-02-15 05:38:41.883877 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.00s 2026-02-15 05:38:41.883883 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.96s 2026-02-15 05:38:41.883890 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.91s 2026-02-15 05:38:42.257731 | orchestrator | + osism apply -a upgrade memcached 2026-02-15 05:38:44.488831 | orchestrator | 2026-02-15 05:38:44 | INFO  | Task 
64f5de32-8d5f-482b-8e63-e471d550dc31 (memcached) was prepared for execution. 2026-02-15 05:38:44.488941 | orchestrator | 2026-02-15 05:38:44 | INFO  | It takes a moment until task 64f5de32-8d5f-482b-8e63-e471d550dc31 (memcached) has been started and output is visible here. 2026-02-15 05:39:18.347910 | orchestrator | 2026-02-15 05:39:18.348024 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 05:39:18.348042 | orchestrator | 2026-02-15 05:39:18.348054 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 05:39:18.348066 | orchestrator | Sunday 15 February 2026 05:38:50 +0000 (0:00:01.896) 0:00:01.896 ******* 2026-02-15 05:39:18.348078 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:39:18.348090 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:39:18.348100 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:39:18.348111 | orchestrator | 2026-02-15 05:39:18.348122 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 05:39:18.348133 | orchestrator | Sunday 15 February 2026 05:38:52 +0000 (0:00:01.713) 0:00:03.609 ******* 2026-02-15 05:39:18.348144 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-02-15 05:39:18.348156 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2026-02-15 05:39:18.348167 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-02-15 05:39:18.348178 | orchestrator | 2026-02-15 05:39:18.348189 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-02-15 05:39:18.348199 | orchestrator | 2026-02-15 05:39:18.348210 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-02-15 05:39:18.348221 | orchestrator | Sunday 15 February 2026 05:38:55 +0000 (0:00:02.817) 0:00:06.427 ******* 2026-02-15 05:39:18.348232 | orchestrator | 
included: /ansible/roles/memcached/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:39:18.348244 | orchestrator | 2026-02-15 05:39:18.348255 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-02-15 05:39:18.348265 | orchestrator | Sunday 15 February 2026 05:38:57 +0000 (0:00:01.913) 0:00:08.341 ******* 2026-02-15 05:39:18.348276 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-15 05:39:18.348288 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-15 05:39:18.348300 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-15 05:39:18.348311 | orchestrator | 2026-02-15 05:39:18.348322 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-02-15 05:39:18.348333 | orchestrator | Sunday 15 February 2026 05:38:58 +0000 (0:00:01.712) 0:00:10.053 ******* 2026-02-15 05:39:18.348344 | orchestrator | ok: [testbed-node-1] => (item=memcached) 2026-02-15 05:39:18.348354 | orchestrator | ok: [testbed-node-0] => (item=memcached) 2026-02-15 05:39:18.348396 | orchestrator | ok: [testbed-node-2] => (item=memcached) 2026-02-15 05:39:18.348421 | orchestrator | 2026-02-15 05:39:18.348450 | orchestrator | TASK [service-check-containers : memcached | Check containers] ***************** 2026-02-15 05:39:18.348518 | orchestrator | Sunday 15 February 2026 05:39:01 +0000 (0:00:02.786) 0:00:12.839 ******* 2026-02-15 05:39:18.348541 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-15 05:39:18.348563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-15 05:39:18.348624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-02-15 05:39:18.348646 | orchestrator | 2026-02-15 05:39:18.348664 | orchestrator | TASK 
[service-check-containers : memcached | Notify handlers to restart containers] *** 2026-02-15 05:39:18.348682 | orchestrator | Sunday 15 February 2026 05:39:03 +0000 (0:00:02.258) 0:00:15.098 ******* 2026-02-15 05:39:18.348703 | orchestrator | changed: [testbed-node-0] => { 2026-02-15 05:39:18.348723 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:39:18.348742 | orchestrator | } 2026-02-15 05:39:18.348760 | orchestrator | changed: [testbed-node-1] => { 2026-02-15 05:39:18.348774 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:39:18.348800 | orchestrator | } 2026-02-15 05:39:18.348812 | orchestrator | changed: [testbed-node-2] => { 2026-02-15 05:39:18.348822 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:39:18.348833 | orchestrator | } 2026-02-15 05:39:18.348844 | orchestrator | 2026-02-15 05:39:18.348855 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-15 05:39:18.348866 | orchestrator | Sunday 15 February 2026 05:39:05 +0000 (0:00:01.462) 0:00:16.560 ******* 2026-02-15 05:39:18.348877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-15 05:39:18.348900 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:39:18.348912 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-15 05:39:18.348923 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:39:18.348935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/2025.1/memcached:1.6.24.20251208', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-02-15 05:39:18.348946 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:39:18.348957 | orchestrator | 2026-02-15 05:39:18.348968 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-02-15 05:39:18.348978 | orchestrator | Sunday 15 February 2026 05:39:07 +0000 (0:00:02.142) 0:00:18.703 ******* 2026-02-15 
05:39:18.348989 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:39:18.349000 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:39:18.349011 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:39:18.349022 | orchestrator | 2026-02-15 05:39:18.349032 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:39:18.349044 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 05:39:18.349056 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 05:39:18.349074 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 05:39:18.349085 | orchestrator | 2026-02-15 05:39:18.349096 | orchestrator | 2026-02-15 05:39:18.349107 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:39:18.349126 | orchestrator | Sunday 15 February 2026 05:39:18 +0000 (0:00:10.927) 0:00:29.631 ******* 2026-02-15 05:39:18.721699 | orchestrator | =============================================================================== 2026-02-15 05:39:18.721802 | orchestrator | memcached : Restart memcached container -------------------------------- 10.93s 2026-02-15 05:39:18.721817 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.82s 2026-02-15 05:39:18.721857 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.79s 2026-02-15 05:39:18.721869 | orchestrator | service-check-containers : memcached | Check containers ----------------- 2.26s 2026-02-15 05:39:18.721880 | orchestrator | service-check-containers : Include tasks -------------------------------- 2.14s 2026-02-15 05:39:18.721891 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.91s 2026-02-15 05:39:18.721902 | 
orchestrator | Group hosts based on Kolla action --------------------------------------- 1.71s 2026-02-15 05:39:18.721913 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.71s 2026-02-15 05:39:18.721924 | orchestrator | service-check-containers : memcached | Notify handlers to restart containers --- 1.46s 2026-02-15 05:39:19.113047 | orchestrator | + osism apply -a upgrade redis 2026-02-15 05:39:21.245336 | orchestrator | 2026-02-15 05:39:21 | INFO  | Task ed961ed4-147d-4b00-9f12-4eed3d5d96f6 (redis) was prepared for execution. 2026-02-15 05:39:21.245437 | orchestrator | 2026-02-15 05:39:21 | INFO  | It takes a moment until task ed961ed4-147d-4b00-9f12-4eed3d5d96f6 (redis) has been started and output is visible here. 2026-02-15 05:39:40.333738 | orchestrator | 2026-02-15 05:39:40.333856 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 05:39:40.333875 | orchestrator | 2026-02-15 05:39:40.333888 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 05:39:40.333908 | orchestrator | Sunday 15 February 2026 05:39:27 +0000 (0:00:01.877) 0:00:01.877 ******* 2026-02-15 05:39:40.333928 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:39:40.333948 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:39:40.333966 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:39:40.333984 | orchestrator | 2026-02-15 05:39:40.334002 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 05:39:40.334090 | orchestrator | Sunday 15 February 2026 05:39:29 +0000 (0:00:01.859) 0:00:03.737 ******* 2026-02-15 05:39:40.334104 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-02-15 05:39:40.334116 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-02-15 05:39:40.334127 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2026-02-15 
05:39:40.334138 | orchestrator | 2026-02-15 05:39:40.334149 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-02-15 05:39:40.334160 | orchestrator | 2026-02-15 05:39:40.334171 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-02-15 05:39:40.334182 | orchestrator | Sunday 15 February 2026 05:39:31 +0000 (0:00:02.661) 0:00:06.401 ******* 2026-02-15 05:39:40.334193 | orchestrator | included: /ansible/roles/redis/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:39:40.334204 | orchestrator | 2026-02-15 05:39:40.334215 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-02-15 05:39:40.334227 | orchestrator | Sunday 15 February 2026 05:39:34 +0000 (0:00:02.727) 0:00:09.128 ******* 2026-02-15 05:39:40.334242 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:40.334260 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:40.334320 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:40.334343 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-15 05:39:40.334388 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-15 05:39:40.334411 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-15 05:39:40.334432 | orchestrator | 2026-02-15 05:39:40.334454 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-02-15 05:39:40.334467 | orchestrator | Sunday 15 February 2026 05:39:37 +0000 (0:00:02.445) 0:00:11.574 ******* 2026-02-15 05:39:40.334481 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:40.334494 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:40.334558 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:40.334574 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-15 05:39:40.334594 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.418844 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.418969 | orchestrator | 2026-02-15 05:39:47.418988 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-02-15 05:39:47.419002 | orchestrator | Sunday 15 February 2026 05:39:40 +0000 (0:00:03.176) 0:00:14.750 ******* 2026-02-15 05:39:47.419016 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.419029 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.419067 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.419093 | orchestrator | ok: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.419105 | orchestrator | ok: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.419135 | orchestrator | ok: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.419147 | orchestrator | 2026-02-15 05:39:47.419159 | orchestrator | TASK [service-check-containers : redis | Check containers] ********************* 2026-02-15 05:39:47.419169 | orchestrator | Sunday 15 February 2026 05:39:44 +0000 (0:00:03.921) 0:00:18.672 ******* 2026-02-15 05:39:47.419181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': 
{'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.419200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.419211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.419228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.419242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-02-15 05:39:47.419261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': 
'30'}}}) 2026-02-15 05:40:15.905859 | orchestrator | 2026-02-15 05:40:15.906100 | orchestrator | TASK [service-check-containers : redis | Notify handlers to restart containers] *** 2026-02-15 05:40:15.906139 | orchestrator | Sunday 15 February 2026 05:39:47 +0000 (0:00:03.164) 0:00:21.837 ******* 2026-02-15 05:40:15.906162 | orchestrator | changed: [testbed-node-0] => { 2026-02-15 05:40:15.906186 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:40:15.906208 | orchestrator | } 2026-02-15 05:40:15.906230 | orchestrator | changed: [testbed-node-1] => { 2026-02-15 05:40:15.906252 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:40:15.906326 | orchestrator | } 2026-02-15 05:40:15.906347 | orchestrator | changed: [testbed-node-2] => { 2026-02-15 05:40:15.906365 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:40:15.906384 | orchestrator | } 2026-02-15 05:40:15.906403 | orchestrator | 2026-02-15 05:40:15.906423 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-15 05:40:15.906442 | orchestrator | Sunday 15 February 2026 05:39:48 +0000 (0:00:01.584) 0:00:23.421 ******* 2026-02-15 05:40:15.906466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-15 05:40:15.906488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-15 05:40:15.906510 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:40:15.906681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-15 05:40:15.906714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-15 05:40:15.906726 | orchestrator | 
skipping: [testbed-node-1] 2026-02-15 05:40:15.906741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis:7.0.15.20251208', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})  2026-02-15 05:40:15.906792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/redis-sentinel:7.0.15.20251208', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})  2026-02-15 05:40:15.906829 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:40:15.906847 | orchestrator | 2026-02-15 05:40:15.906865 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-15 05:40:15.906883 | orchestrator | Sunday 15 February 2026 05:39:50 +0000 (0:00:01.844) 0:00:25.266 ******* 2026-02-15 05:40:15.906899 | orchestrator | 2026-02-15 05:40:15.906914 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-15 05:40:15.906931 | orchestrator | Sunday 15 February 2026 05:39:51 +0000 
(0:00:00.444) 0:00:25.710 ******* 2026-02-15 05:40:15.906950 | orchestrator | 2026-02-15 05:40:15.906966 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-02-15 05:40:15.906985 | orchestrator | Sunday 15 February 2026 05:39:51 +0000 (0:00:00.436) 0:00:26.147 ******* 2026-02-15 05:40:15.907002 | orchestrator | 2026-02-15 05:40:15.907021 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-02-15 05:40:15.907039 | orchestrator | Sunday 15 February 2026 05:39:52 +0000 (0:00:00.791) 0:00:26.939 ******* 2026-02-15 05:40:15.907058 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:40:15.907075 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:40:15.907094 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:40:15.907112 | orchestrator | 2026-02-15 05:40:15.907131 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-02-15 05:40:15.907150 | orchestrator | Sunday 15 February 2026 05:40:03 +0000 (0:00:11.132) 0:00:38.071 ******* 2026-02-15 05:40:15.907169 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:40:15.907186 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:40:15.907203 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:40:15.907219 | orchestrator | 2026-02-15 05:40:15.907237 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:40:15.907256 | orchestrator | testbed-node-0 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 05:40:15.907275 | orchestrator | testbed-node-1 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 05:40:15.907303 | orchestrator | testbed-node-2 : ok=10  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-02-15 05:40:15.907324 | orchestrator | 2026-02-15 05:40:15.907340 | orchestrator | 2026-02-15 05:40:15.907358 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:40:15.907375 | orchestrator | Sunday 15 February 2026 05:40:15 +0000 (0:00:11.814) 0:00:49.885 ******* 2026-02-15 05:40:15.907393 | orchestrator | =============================================================================== 2026-02-15 05:40:15.907411 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 11.81s 2026-02-15 05:40:15.907430 | orchestrator | redis : Restart redis container ---------------------------------------- 11.13s 2026-02-15 05:40:15.907448 | orchestrator | redis : Copying over redis config files --------------------------------- 3.92s 2026-02-15 05:40:15.907466 | orchestrator | redis : Copying over default config.json files -------------------------- 3.18s 2026-02-15 05:40:15.907484 | orchestrator | service-check-containers : redis | Check containers --------------------- 3.16s 2026-02-15 05:40:15.907502 | orchestrator | redis : include_tasks --------------------------------------------------- 2.73s 2026-02-15 05:40:15.907520 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.66s 2026-02-15 05:40:15.907554 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.45s 2026-02-15 05:40:15.907572 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.86s 2026-02-15 05:40:15.907591 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.84s 2026-02-15 05:40:15.907691 | orchestrator | redis : Flush handlers -------------------------------------------------- 1.67s 2026-02-15 05:40:15.907712 | orchestrator | service-check-containers : redis | Notify handlers to restart containers --- 1.58s 2026-02-15 05:40:16.257806 | orchestrator | + osism apply -a upgrade mariadb 2026-02-15 05:40:18.452332 | orchestrator | 2026-02-15 05:40:18 | INFO  | Task 
5a791705-db74-49e5-8507-9338672c9326 (mariadb) was prepared for execution. 2026-02-15 05:40:18.452411 | orchestrator | 2026-02-15 05:40:18 | INFO  | It takes a moment until task 5a791705-db74-49e5-8507-9338672c9326 (mariadb) has been started and output is visible here. 2026-02-15 05:40:32.783383 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-15 05:40:32.783501 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-15 05:40:32.783528 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-15 05:40:32.783539 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-15 05:40:32.783562 | orchestrator | 2026-02-15 05:40:32.783574 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 05:40:32.783584 | orchestrator | 2026-02-15 05:40:32.783595 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 05:40:32.783606 | orchestrator | Sunday 15 February 2026 05:40:23 +0000 (0:00:01.003) 0:00:01.003 ******* 2026-02-15 05:40:32.783617 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:40:32.783629 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:40:32.783639 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:40:32.783701 | orchestrator | 2026-02-15 05:40:32.783713 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 05:40:32.783724 | orchestrator | Sunday 15 February 2026 05:40:24 +0000 (0:00:01.024) 0:00:02.027 ******* 2026-02-15 05:40:32.783735 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-02-15 05:40:32.783747 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-02-15 05:40:32.783758 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-02-15 05:40:32.783768 | orchestrator | 2026-02-15 05:40:32.783779 | orchestrator 
| PLAY [Apply role mariadb] ****************************************************** 2026-02-15 05:40:32.783790 | orchestrator | 2026-02-15 05:40:32.783801 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-02-15 05:40:32.783812 | orchestrator | Sunday 15 February 2026 05:40:25 +0000 (0:00:01.010) 0:00:03.037 ******* 2026-02-15 05:40:32.783823 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 05:40:32.783834 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-15 05:40:32.783844 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-15 05:40:32.783855 | orchestrator | 2026-02-15 05:40:32.783866 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-15 05:40:32.783877 | orchestrator | Sunday 15 February 2026 05:40:26 +0000 (0:00:00.407) 0:00:03.445 ******* 2026-02-15 05:40:32.783888 | orchestrator | included: /ansible/roles/mariadb/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:40:32.783900 | orchestrator | 2026-02-15 05:40:32.783912 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-02-15 05:40:32.783925 | orchestrator | Sunday 15 February 2026 05:40:27 +0000 (0:00:01.315) 0:00:04.760 ******* 2026-02-15 05:40:32.783962 | orchestrator | ok: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 
'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-15 05:40:32.784027 | orchestrator | ok: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 
'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-15 05:40:32.784050 | orchestrator | ok: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-02-15 05:40:32.784072 | orchestrator | 2026-02-15 05:40:32.784085 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-02-15 05:40:32.784097 | orchestrator | Sunday 15 February 2026 05:40:30 +0000 (0:00:03.332) 0:00:08.093 ******* 2026-02-15 05:40:32.784109 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:40:32.784122 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:40:32.784134 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:40:32.784147 | orchestrator | 2026-02-15 05:40:32.784159 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-02-15 05:40:32.784171 | orchestrator | Sunday 15 February 2026 05:40:31 +0000 (0:00:00.600) 0:00:08.693 ******* 2026-02-15 05:40:32.784183 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:40:32.784194 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:40:32.784206 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:40:32.784219 | 
orchestrator |
2026-02-15 05:40:32.784231 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-02-15 05:40:32.784250 | orchestrator | Sunday 15 February 2026 05:40:32 +0000 (0:00:01.192) 0:00:09.886 *******
2026-02-15 05:40:45.236235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:45.236368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:45.236400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:45.236417 | orchestrator |
2026-02-15 05:40:45.236427 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-02-15 05:40:45.236436 | orchestrator | Sunday 15 February 2026 05:40:36 +0000 (0:00:03.530) 0:00:13.416 *******
2026-02-15 05:40:45.236444 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:40:45.236453 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:40:45.236461 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:40:45.236470 | orchestrator |
2026-02-15 05:40:45.236478 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-02-15 05:40:45.236486 | orchestrator | Sunday 15 February 2026 05:40:37 +0000 (0:00:01.047) 0:00:14.464 *******
2026-02-15 05:40:45.236494 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:40:45.236502 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:40:45.236510 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:40:45.236517 | orchestrator |
2026-02-15 05:40:45.236525 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-02-15 05:40:45.236533 | orchestrator | Sunday 15 February 2026 05:40:41 +0000 (0:00:04.028) 0:00:18.492 *******
2026-02-15 05:40:45.236542 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 05:40:45.236550 | orchestrator |
2026-02-15 05:40:45.236557 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-02-15 05:40:45.236565 | orchestrator | Sunday 15 February 2026 05:40:42 +0000 (0:00:01.106) 0:00:19.598 *******
2026-02-15 05:40:45.236584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:47.734939 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:40:47.735073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:47.735127 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:40:47.735160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:47.735173 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:40:47.735184 | orchestrator |
2026-02-15 05:40:47.735196 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-02-15 05:40:47.735208 | orchestrator | Sunday 15 February 2026 05:40:45 +0000 (0:00:02.740) 0:00:22.338 *******
2026-02-15 05:40:47.735243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:47.735267 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:40:47.735284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:47.735296 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:40:47.735319 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:54.507846 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:40:54.507988 | orchestrator |
2026-02-15 05:40:54.508006 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-02-15 05:40:54.508019 | orchestrator | Sunday 15 February 2026 05:40:47 +0000 (0:00:02.498) 0:00:24.837 *******
2026-02-15 05:40:54.508057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:54.508075 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:40:54.508087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:54.508125 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:40:54.508164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:54.508179 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:40:54.508190 | orchestrator |
2026-02-15 05:40:54.508201 | orchestrator | TASK [service-check-containers : mariadb | Check containers] *******************
2026-02-15 05:40:54.508212 | orchestrator | Sunday 15 February 2026 05:40:51 +0000 (0:00:03.408) 0:00:28.245 *******
2026-02-15 05:40:54.508224 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:54.508261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:58.212292 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:58.212464 | orchestrator |
2026-02-15 05:40:58.212486 | orchestrator | TASK [service-check-containers : mariadb | Notify handlers to restart containers] ***
2026-02-15 05:40:58.212500 | orchestrator | Sunday 15 February 2026 05:40:54 +0000 (0:00:00.377) 0:00:31.615 *******
2026-02-15 05:40:58.212512 | orchestrator | changed: [testbed-node-0] => {
2026-02-15 05:40:58.212524 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:40:58.212535 | orchestrator | }
2026-02-15 05:40:58.212547 | orchestrator | changed: [testbed-node-1] => {
2026-02-15 05:40:58.212557 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:40:58.212568 | orchestrator | }
2026-02-15 05:40:58.212579 | orchestrator | changed: [testbed-node-2] => {
2026-02-15 05:40:58.212590 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:40:58.212601 | orchestrator | }
2026-02-15 05:40:58.212612 | orchestrator |
2026-02-15 05:40:58.212623 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-15 05:40:58.212634 | orchestrator | Sunday 15 February 2026 05:40:54 +0000 (0:00:00.377) 0:00:31.993 *******
2026-02-15 05:40:58.212694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:58.212788 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:40:58.212813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:58.212840 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:40:58.212861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-02-15 05:40:58.212874 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:40:58.212885 | orchestrator |
2026-02-15 05:40:58.212896 | orchestrator | TASK [mariadb : Checking for mariadb cluster] **********************************
2026-02-15 05:40:58.212923 | orchestrator | Sunday 15 February 2026 05:40:58 +0000 (0:00:03.315) 0:00:35.308 *******
2026-02-15 05:41:07.496828 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:41:07.496967 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:41:07.496979 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:41:07.496987 | orchestrator |
2026-02-15 05:41:07.496996 | orchestrator | TASK [mariadb : Cleaning up temp file on localhost] ****************************
2026-02-15 05:41:07.497028 | orchestrator | Sunday 15 February 2026 05:40:58 +0000 (0:00:00.392) 0:00:35.700 *******
2026-02-15 05:41:07.497036 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:41:07.497044 | orchestrator |
2026-02-15 05:41:07.497051 | orchestrator | TASK [mariadb : Stop MariaDB containers] ***************************************
2026-02-15 05:41:07.497059 | orchestrator | Sunday 15 February 2026 05:40:58 +0000 (0:00:00.127) 0:00:35.828 *******
2026-02-15 05:41:07.497066 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:41:07.497073 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:41:07.497080 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:41:07.497087 | orchestrator |
2026-02-15 05:41:07.497095 | orchestrator | TASK [mariadb : Run MariaDB wsrep recovery] ************************************
2026-02-15 05:41:07.497102 | orchestrator | Sunday 15 February 2026 05:40:59 +0000 (0:00:00.356) 0:00:36.185 *******
2026-02-15 05:41:07.497109 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:41:07.497117 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:41:07.497124 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:41:07.497132 | orchestrator |
2026-02-15 05:41:07.497139 | orchestrator | TASK [mariadb : Copying MariaDB log file to /tmp] ******************************
2026-02-15 05:41:07.497147 | orchestrator | Sunday 15 February 2026 05:40:59 +0000 (0:00:00.594) 0:00:36.780 *******
2026-02-15 05:41:07.497154 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:41:07.497161 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:41:07.497168 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:41:07.497177 | orchestrator |
2026-02-15 05:41:07.497185 | orchestrator | TASK [mariadb : Get MariaDB wsrep recovery seqno] ******************************
2026-02-15 05:41:07.497194 | orchestrator | Sunday 15 February 2026 05:41:00 +0000 (0:00:00.374) 0:00:37.155 *******
2026-02-15 05:41:07.497202 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:41:07.497211 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:41:07.497219 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:41:07.497228 | orchestrator |
2026-02-15 05:41:07.497236 | orchestrator | TASK [mariadb : Removing MariaDB log file from /tmp] ***************************
2026-02-15 05:41:07.497245 | orchestrator | Sunday 15 February 2026 05:41:00 +0000 (0:00:00.360) 0:00:37.516 *******
2026-02-15
05:41:07.497253 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.497262 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:07.497271 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:07.497281 | orchestrator | 2026-02-15 05:41:07.497292 | orchestrator | TASK [mariadb : Registering MariaDB seqno variable] **************************** 2026-02-15 05:41:07.497303 | orchestrator | Sunday 15 February 2026 05:41:00 +0000 (0:00:00.355) 0:00:37.871 ******* 2026-02-15 05:41:07.497313 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.497323 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:07.497334 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:07.497344 | orchestrator | 2026-02-15 05:41:07.497354 | orchestrator | TASK [mariadb : Comparing seqno value on all mariadb hosts] ******************** 2026-02-15 05:41:07.497365 | orchestrator | Sunday 15 February 2026 05:41:01 +0000 (0:00:00.572) 0:00:38.443 ******* 2026-02-15 05:41:07.497379 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-15 05:41:07.497395 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-15 05:41:07.497409 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-15 05:41:07.497424 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.497439 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-15 05:41:07.497453 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-15 05:41:07.497469 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-15 05:41:07.497485 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:07.497497 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-15 05:41:07.497505 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-15 05:41:07.497516 | orchestrator | skipping: [testbed-node-2] => 
(item=testbed-node-2)  2026-02-15 05:41:07.497541 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:07.497556 | orchestrator | 2026-02-15 05:41:07.497571 | orchestrator | TASK [mariadb : Writing hostname of host with the largest seqno to temp file] *** 2026-02-15 05:41:07.497586 | orchestrator | Sunday 15 February 2026 05:41:01 +0000 (0:00:00.409) 0:00:38.853 ******* 2026-02-15 05:41:07.497602 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.497612 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:07.497621 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:07.497630 | orchestrator | 2026-02-15 05:41:07.497646 | orchestrator | TASK [mariadb : Registering mariadb_recover_inventory_name from temp file] ***** 2026-02-15 05:41:07.497660 | orchestrator | Sunday 15 February 2026 05:41:02 +0000 (0:00:00.398) 0:00:39.252 ******* 2026-02-15 05:41:07.497675 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.497690 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:07.497704 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:07.497719 | orchestrator | 2026-02-15 05:41:07.497793 | orchestrator | TASK [mariadb : Store bootstrap and master hostnames into facts] *************** 2026-02-15 05:41:07.497811 | orchestrator | Sunday 15 February 2026 05:41:02 +0000 (0:00:00.550) 0:00:39.803 ******* 2026-02-15 05:41:07.497827 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.497842 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:07.497857 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:07.497872 | orchestrator | 2026-02-15 05:41:07.497886 | orchestrator | TASK [mariadb : Set grastate.dat file from MariaDB container in bootstrap host] *** 2026-02-15 05:41:07.497903 | orchestrator | Sunday 15 February 2026 05:41:03 +0000 (0:00:00.366) 0:00:40.169 ******* 2026-02-15 05:41:07.497918 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.497932 | orchestrator | 
skipping: [testbed-node-1] 2026-02-15 05:41:07.497949 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:07.497965 | orchestrator | 2026-02-15 05:41:07.497981 | orchestrator | TASK [mariadb : Starting first MariaDB container] ****************************** 2026-02-15 05:41:07.498094 | orchestrator | Sunday 15 February 2026 05:41:03 +0000 (0:00:00.351) 0:00:40.521 ******* 2026-02-15 05:41:07.498117 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.498132 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:07.498149 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:07.498164 | orchestrator | 2026-02-15 05:41:07.498181 | orchestrator | TASK [mariadb : Wait for first MariaDB container] ****************************** 2026-02-15 05:41:07.498197 | orchestrator | Sunday 15 February 2026 05:41:03 +0000 (0:00:00.338) 0:00:40.859 ******* 2026-02-15 05:41:07.498212 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.498227 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:07.498243 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:07.498259 | orchestrator | 2026-02-15 05:41:07.498275 | orchestrator | TASK [mariadb : Set first MariaDB container as primary] ************************ 2026-02-15 05:41:07.498290 | orchestrator | Sunday 15 February 2026 05:41:04 +0000 (0:00:00.577) 0:00:41.437 ******* 2026-02-15 05:41:07.498306 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.498318 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:07.498327 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:07.498335 | orchestrator | 2026-02-15 05:41:07.498344 | orchestrator | TASK [mariadb : Wait for MariaDB to become operational] ************************ 2026-02-15 05:41:07.498352 | orchestrator | Sunday 15 February 2026 05:41:04 +0000 (0:00:00.345) 0:00:41.782 ******* 2026-02-15 05:41:07.498361 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.498370 | orchestrator | 
skipping: [testbed-node-1] 2026-02-15 05:41:07.498378 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:07.498387 | orchestrator | 2026-02-15 05:41:07.498396 | orchestrator | TASK [mariadb : Restart slave MariaDB container(s)] **************************** 2026-02-15 05:41:07.498404 | orchestrator | Sunday 15 February 2026 05:41:05 +0000 (0:00:00.375) 0:00:42.157 ******* 2026-02-15 05:41:07.498420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 
testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 05:41:07.498443 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:07.498472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 
2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 05:41:10.619663 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:10.619828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 05:41:10.619877 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:10.619889 | orchestrator | 2026-02-15 05:41:10.619900 | orchestrator | TASK [mariadb : Wait for slave MariaDB] **************************************** 2026-02-15 05:41:10.619911 | orchestrator | Sunday 15 February 2026 05:41:07 +0000 (0:00:02.444) 0:00:44.602 ******* 2026-02-15 05:41:10.619921 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:10.619930 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:10.619940 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:10.619949 | orchestrator | 2026-02-15 05:41:10.619959 | orchestrator | TASK [mariadb : Restart master MariaDB container(s)] *************************** 2026-02-15 05:41:10.619968 | orchestrator | Sunday 15 February 2026 05:41:08 +0000 (0:00:00.588) 0:00:45.191 ******* 2026-02-15 05:41:10.620016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 05:41:10.620038 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:41:10.620049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 05:41:10.620060 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:41:10.620075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/mariadb-server:10.11.15.20251208', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-02-15 05:41:10.620086 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:41:10.620096 | orchestrator | 2026-02-15 05:41:10.620106 | orchestrator | TASK [mariadb : Wait for master mariadb] *************************************** 2026-02-15 05:41:10.620122 | orchestrator | Sunday 15 February 2026 05:41:10 +0000 (0:00:02.338) 0:00:47.529 ******* 2026-02-15 05:41:10.620138 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:10.307871 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:10.308037 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:10.308055 | orchestrator | 2026-02-15 05:43:10.308069 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-15 05:43:10.308082 | orchestrator | Sunday 15 February 2026 05:41:11 +0000 (0:00:00.713) 0:00:48.243 ******* 2026-02-15 05:43:10.308093 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:10.308104 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:10.308115 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:10.308126 | orchestrator | 2026-02-15 05:43:10.308137 | orchestrator | TASK [service-check : mariadb | Fail 
if containers are missing or not running] *** 2026-02-15 05:43:10.308149 | orchestrator | Sunday 15 February 2026 05:41:11 +0000 (0:00:00.572) 0:00:48.815 ******* 2026-02-15 05:43:10.308160 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:10.308171 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:10.308182 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:10.308193 | orchestrator | 2026-02-15 05:43:10.308203 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-15 05:43:10.308214 | orchestrator | Sunday 15 February 2026 05:41:12 +0000 (0:00:00.377) 0:00:49.193 ******* 2026-02-15 05:43:10.308225 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:10.308236 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:10.308246 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:10.308257 | orchestrator | 2026-02-15 05:43:10.308268 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-15 05:43:10.308279 | orchestrator | Sunday 15 February 2026 05:41:13 +0000 (0:00:00.987) 0:00:50.181 ******* 2026-02-15 05:43:10.308290 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:10.308301 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:10.308311 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:10.308322 | orchestrator | 2026-02-15 05:43:10.308333 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2026-02-15 05:43:10.308343 | orchestrator | Sunday 15 February 2026 05:41:14 +0000 (0:00:01.025) 0:00:51.206 ******* 2026-02-15 05:43:10.308354 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:10.308365 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:10.308376 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:10.308387 | orchestrator | 2026-02-15 05:43:10.308399 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume 
availability] ************* 2026-02-15 05:43:10.308412 | orchestrator | Sunday 15 February 2026 05:41:15 +0000 (0:00:00.952) 0:00:52.158 ******* 2026-02-15 05:43:10.308424 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:10.308436 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:10.308449 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:10.308460 | orchestrator | 2026-02-15 05:43:10.308473 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-02-15 05:43:10.308486 | orchestrator | Sunday 15 February 2026 05:41:15 +0000 (0:00:00.378) 0:00:52.537 ******* 2026-02-15 05:43:10.308498 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:10.308510 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:10.308522 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:10.308535 | orchestrator | 2026-02-15 05:43:10.308547 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-02-15 05:43:10.308560 | orchestrator | Sunday 15 February 2026 05:41:15 +0000 (0:00:00.351) 0:00:52.888 ******* 2026-02-15 05:43:10.308572 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:10.308584 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:10.308596 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:10.308609 | orchestrator | 2026-02-15 05:43:10.308620 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-02-15 05:43:10.308631 | orchestrator | Sunday 15 February 2026 05:41:16 +0000 (0:00:01.060) 0:00:53.949 ******* 2026-02-15 05:43:10.308669 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:10.308680 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:10.308691 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:10.308702 | orchestrator | 2026-02-15 05:43:10.308712 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-02-15 05:43:10.308723 | orchestrator | 
Sunday 15 February 2026 05:41:17 +0000 (0:00:00.377) 0:00:54.327 ******* 2026-02-15 05:43:10.308733 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:10.308758 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:10.308769 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:10.308780 | orchestrator | 2026-02-15 05:43:10.308790 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-02-15 05:43:10.308801 | orchestrator | Sunday 15 February 2026 05:41:17 +0000 (0:00:00.379) 0:00:54.707 ******* 2026-02-15 05:43:10.308812 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:10.308822 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:10.308833 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:10.308843 | orchestrator | 2026-02-15 05:43:10.308854 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-02-15 05:43:10.308864 | orchestrator | Sunday 15 February 2026 05:41:19 +0000 (0:00:02.280) 0:00:56.988 ******* 2026-02-15 05:43:10.308875 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:10.308886 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:10.308896 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:10.308915 | orchestrator | 2026-02-15 05:43:10.308932 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-02-15 05:43:10.308951 | orchestrator | Sunday 15 February 2026 05:41:20 +0000 (0:00:00.690) 0:00:57.679 ******* 2026-02-15 05:43:10.309018 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:10.309039 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:10.309057 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:10.309075 | orchestrator | 2026-02-15 05:43:10.309093 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-02-15 05:43:10.309111 | orchestrator | Sunday 15 February 2026 05:41:20 +0000 
(0:00:00.341) 0:00:58.020 ******* 2026-02-15 05:43:10.309129 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:10.309148 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:10.309164 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:10.309174 | orchestrator | 2026-02-15 05:43:10.309185 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-15 05:43:10.309196 | orchestrator | Sunday 15 February 2026 05:41:21 +0000 (0:00:00.730) 0:00:58.751 ******* 2026-02-15 05:43:10.309207 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:10.309218 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:10.309228 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:10.309257 | orchestrator | 2026-02-15 05:43:10.309268 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-02-15 05:43:10.309279 | orchestrator | Sunday 15 February 2026 05:41:22 +0000 (0:00:00.615) 0:00:59.367 ******* 2026-02-15 05:43:10.309290 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:10.309301 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-15 05:43:10.309312 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-15 05:43:10.309333 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:10.309344 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:10.309355 | orchestrator | 2026-02-15 05:43:10.309365 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-02-15 05:43:10.309376 | orchestrator | Sunday 15 February 2026 05:41:23 +0000 (0:00:00.805) 0:01:00.173 ******* 2026-02-15 05:43:10.309387 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:43:10.309398 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:43:10.309426 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:43:10.309446 | 
orchestrator | 2026-02-15 05:43:10.309467 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-02-15 05:43:10.309486 | orchestrator | Sunday 15 February 2026 05:41:23 +0000 (0:00:00.646) 0:01:00.819 ******* 2026-02-15 05:43:10.309505 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:10.309521 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:10.309536 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:10.309547 | orchestrator | 2026-02-15 05:43:10.309558 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-15 05:43:10.309569 | orchestrator | 2026-02-15 05:43:10.309579 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-15 05:43:10.309590 | orchestrator | Sunday 15 February 2026 05:41:24 +0000 (0:00:00.777) 0:01:01.597 ******* 2026-02-15 05:43:10.309601 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:43:10.309611 | orchestrator | 2026-02-15 05:43:10.309622 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-15 05:43:10.309633 | orchestrator | Sunday 15 February 2026 05:41:50 +0000 (0:00:25.739) 0:01:27.336 ******* 2026-02-15 05:43:10.309643 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:10.309654 | orchestrator | 2026-02-15 05:43:10.309665 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-15 05:43:10.309676 | orchestrator | Sunday 15 February 2026 05:41:55 +0000 (0:00:05.585) 0:01:32.922 ******* 2026-02-15 05:43:10.309686 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:10.309697 | orchestrator | 2026-02-15 05:43:10.309708 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-15 05:43:10.309719 | orchestrator | 2026-02-15 05:43:10.309729 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-02-15 05:43:10.309740 | orchestrator | Sunday 15 February 2026 05:41:58 +0000 (0:00:02.588) 0:01:35.510 ******* 2026-02-15 05:43:10.309751 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:43:10.309761 | orchestrator | 2026-02-15 05:43:10.309772 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-15 05:43:10.309783 | orchestrator | Sunday 15 February 2026 05:42:23 +0000 (0:00:25.525) 0:02:01.036 ******* 2026-02-15 05:43:10.309793 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:10.309804 | orchestrator | 2026-02-15 05:43:10.309815 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-15 05:43:10.309826 | orchestrator | Sunday 15 February 2026 05:42:29 +0000 (0:00:05.636) 0:02:06.673 ******* 2026-02-15 05:43:10.309837 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:10.309848 | orchestrator | 2026-02-15 05:43:10.309858 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-02-15 05:43:10.309869 | orchestrator | 2026-02-15 05:43:10.309879 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-02-15 05:43:10.309890 | orchestrator | Sunday 15 February 2026 05:42:32 +0000 (0:00:02.994) 0:02:09.667 ******* 2026-02-15 05:43:10.309901 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:43:10.309911 | orchestrator | 2026-02-15 05:43:10.309930 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-02-15 05:43:10.309941 | orchestrator | Sunday 15 February 2026 05:42:58 +0000 (0:00:25.486) 0:02:35.154 ******* 2026-02-15 05:43:10.309952 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:10.309963 | orchestrator | 2026-02-15 05:43:10.310004 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-02-15 05:43:10.310079 
| orchestrator | Sunday 15 February 2026 05:43:03 +0000 (0:00:05.624) 0:02:40.778 ******* 2026-02-15 05:43:10.310092 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2026-02-15 05:43:10.310103 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-02-15 05:43:10.310114 | orchestrator | mariadb_bootstrap_restart 2026-02-15 05:43:10.310125 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:10.310136 | orchestrator | 2026-02-15 05:43:10.310146 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-02-15 05:43:10.310166 | orchestrator | skipping: no hosts matched 2026-02-15 05:43:10.310177 | orchestrator | 2026-02-15 05:43:10.310188 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-02-15 05:43:10.310199 | orchestrator | skipping: no hosts matched 2026-02-15 05:43:10.310209 | orchestrator | 2026-02-15 05:43:10.310220 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-02-15 05:43:10.310231 | orchestrator | 2026-02-15 05:43:10.310242 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-02-15 05:43:10.310252 | orchestrator | Sunday 15 February 2026 05:43:07 +0000 (0:00:03.381) 0:02:44.160 ******* 2026-02-15 05:43:10.310263 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:43:10.310274 | orchestrator | 2026-02-15 05:43:10.310285 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-02-15 05:43:10.310296 | orchestrator | Sunday 15 February 2026 05:43:08 +0000 (0:00:01.159) 0:02:45.319 ******* 2026-02-15 05:43:10.310306 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:10.310317 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:10.310338 | orchestrator | ok: [testbed-node-0] 2026-02-15 
05:43:47.564685 | orchestrator | 2026-02-15 05:43:47.564800 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-02-15 05:43:47.564817 | orchestrator | Sunday 15 February 2026 05:43:10 +0000 (0:00:02.086) 0:02:47.406 ******* 2026-02-15 05:43:47.564830 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:47.564843 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:47.564854 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:43:47.564865 | orchestrator | 2026-02-15 05:43:47.564877 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-02-15 05:43:47.564888 | orchestrator | Sunday 15 February 2026 05:43:12 +0000 (0:00:02.174) 0:02:49.580 ******* 2026-02-15 05:43:47.564899 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:47.564910 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:47.564921 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:47.564933 | orchestrator | 2026-02-15 05:43:47.564944 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-02-15 05:43:47.564955 | orchestrator | Sunday 15 February 2026 05:43:14 +0000 (0:00:02.265) 0:02:51.846 ******* 2026-02-15 05:43:47.564966 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:47.564977 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:47.564988 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:43:47.564999 | orchestrator | 2026-02-15 05:43:47.565010 | orchestrator | TASK [service-check : mariadb | Get container facts] *************************** 2026-02-15 05:43:47.565021 | orchestrator | Sunday 15 February 2026 05:43:16 +0000 (0:00:02.077) 0:02:53.924 ******* 2026-02-15 05:43:47.565081 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:47.565095 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:47.565106 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:47.565117 | 
orchestrator | 2026-02-15 05:43:47.565128 | orchestrator | TASK [service-check : mariadb | Fail if containers are missing or not running] *** 2026-02-15 05:43:47.565140 | orchestrator | Sunday 15 February 2026 05:43:22 +0000 (0:00:05.394) 0:02:59.318 ******* 2026-02-15 05:43:47.565151 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:47.565162 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:47.565173 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:47.565184 | orchestrator | 2026-02-15 05:43:47.565195 | orchestrator | TASK [service-check : mariadb | Fail if containers are unhealthy] ************** 2026-02-15 05:43:47.565208 | orchestrator | Sunday 15 February 2026 05:43:24 +0000 (0:00:02.612) 0:03:01.930 ******* 2026-02-15 05:43:47.565220 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:43:47.565233 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:43:47.565246 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:43:47.565258 | orchestrator | 2026-02-15 05:43:47.565271 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-02-15 05:43:47.565310 | orchestrator | Sunday 15 February 2026 05:43:25 +0000 (0:00:00.877) 0:03:02.808 ******* 2026-02-15 05:43:47.565323 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:43:47.565335 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:43:47.565348 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:43:47.565360 | orchestrator | 2026-02-15 05:43:47.565372 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-02-15 05:43:47.565385 | orchestrator | Sunday 15 February 2026 05:43:28 +0000 (0:00:02.544) 0:03:05.352 ******* 2026-02-15 05:43:47.565398 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:43:47.565410 | orchestrator | 2026-02-15 05:43:47.565423 | orchestrator | TASK [mariadb : Run upgrade in MariaDB container] 
****************************** 2026-02-15 05:43:47.565435 | orchestrator | Sunday 15 February 2026 05:43:29 +0000 (0:00:01.287) 0:03:06.639 ******* 2026-02-15 05:43:47.565448 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:43:47.565460 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:43:47.565472 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:43:47.565484 | orchestrator | 2026-02-15 05:43:47.565497 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:43:47.565510 | orchestrator | testbed-node-0 : ok=34  changed=8  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2026-02-15 05:43:47.565539 | orchestrator | testbed-node-1 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-15 05:43:47.565552 | orchestrator | testbed-node-2 : ok=26  changed=6  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0 2026-02-15 05:43:47.565565 | orchestrator | 2026-02-15 05:43:47.565577 | orchestrator | 2026-02-15 05:43:47.565588 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:43:47.565599 | orchestrator | Sunday 15 February 2026 05:43:47 +0000 (0:00:17.517) 0:03:24.157 ******* 2026-02-15 05:43:47.565610 | orchestrator | =============================================================================== 2026-02-15 05:43:47.565621 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 76.75s 2026-02-15 05:43:47.565632 | orchestrator | mariadb : Run upgrade in MariaDB container ----------------------------- 17.52s 2026-02-15 05:43:47.565643 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 16.85s 2026-02-15 05:43:47.565654 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 8.96s 2026-02-15 05:43:47.565664 | orchestrator | service-check : mariadb | Get container facts --------------------------- 
5.39s 2026-02-15 05:43:47.565675 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.03s 2026-02-15 05:43:47.565686 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.53s 2026-02-15 05:43:47.565697 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.41s 2026-02-15 05:43:47.565708 | orchestrator | service-check-containers : mariadb | Check containers ------------------- 3.37s 2026-02-15 05:43:47.565719 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.33s 2026-02-15 05:43:47.565748 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.32s 2026-02-15 05:43:47.565759 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.74s 2026-02-15 05:43:47.565770 | orchestrator | service-check : mariadb | Fail if containers are missing or not running --- 2.61s 2026-02-15 05:43:47.565781 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.54s 2026-02-15 05:43:47.565792 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.50s 2026-02-15 05:43:47.565803 | orchestrator | mariadb : Restart slave MariaDB container(s) ---------------------------- 2.44s 2026-02-15 05:43:47.565813 | orchestrator | mariadb : Restart master MariaDB container(s) --------------------------- 2.34s 2026-02-15 05:43:47.565824 | orchestrator | mariadb : Check MariaDB service WSREP sync status ----------------------- 2.28s 2026-02-15 05:43:47.565842 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.27s 2026-02-15 05:43:47.565854 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.17s 2026-02-15 05:43:47.873315 | orchestrator | + osism apply -a upgrade rabbitmq 2026-02-15 05:43:49.977085 | orchestrator | 
2026-02-15 05:43:49 | INFO  | Task 5768ea67-c323-4b87-8e25-c664c5fda25a (rabbitmq) was prepared for execution. 2026-02-15 05:43:49.977187 | orchestrator | 2026-02-15 05:43:49 | INFO  | It takes a moment until task 5768ea67-c323-4b87-8e25-c664c5fda25a (rabbitmq) has been started and output is visible here. 2026-02-15 05:44:19.766884 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-15 05:44:19.767054 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-15 05:44:19.767130 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-15 05:44:19.767143 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-15 05:44:19.767166 | orchestrator | 2026-02-15 05:44:19.767178 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 05:44:19.767189 | orchestrator | 2026-02-15 05:44:19.767201 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 05:44:19.767212 | orchestrator | Sunday 15 February 2026 05:43:55 +0000 (0:00:01.088) 0:00:01.088 ******* 2026-02-15 05:44:19.767223 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:44:19.767235 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:44:19.767247 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:44:19.767258 | orchestrator | 2026-02-15 05:44:19.767268 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 05:44:19.767279 | orchestrator | Sunday 15 February 2026 05:43:56 +0000 (0:00:00.898) 0:00:01.987 ******* 2026-02-15 05:44:19.767291 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-02-15 05:44:19.767302 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-02-15 05:44:19.767313 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-02-15 05:44:19.767324 | orchestrator | 
2026-02-15 05:44:19.767335 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-02-15 05:44:19.767346 | orchestrator | 2026-02-15 05:44:19.767357 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-15 05:44:19.767367 | orchestrator | Sunday 15 February 2026 05:43:57 +0000 (0:00:01.081) 0:00:03.069 ******* 2026-02-15 05:44:19.767378 | orchestrator | included: /ansible/roles/rabbitmq/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:44:19.767390 | orchestrator | 2026-02-15 05:44:19.767418 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-15 05:44:19.767432 | orchestrator | Sunday 15 February 2026 05:43:58 +0000 (0:00:01.081) 0:00:04.150 ******* 2026-02-15 05:44:19.767445 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:44:19.767457 | orchestrator | 2026-02-15 05:44:19.767470 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-02-15 05:44:19.767483 | orchestrator | Sunday 15 February 2026 05:43:59 +0000 (0:00:01.323) 0:00:05.474 ******* 2026-02-15 05:44:19.767496 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:44:19.767508 | orchestrator | 2026-02-15 05:44:19.767521 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-02-15 05:44:19.767533 | orchestrator | Sunday 15 February 2026 05:44:02 +0000 (0:00:02.185) 0:00:07.659 ******* 2026-02-15 05:44:19.767546 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:44:19.767558 | orchestrator | 2026-02-15 05:44:19.767571 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-02-15 05:44:19.767608 | orchestrator | Sunday 15 February 2026 05:44:11 +0000 (0:00:09.226) 0:00:16.886 ******* 2026-02-15 05:44:19.767621 | orchestrator | ok: [testbed-node-0] => { 2026-02-15 05:44:19.767633 
| orchestrator |  "changed": false, 2026-02-15 05:44:19.767645 | orchestrator |  "msg": "All assertions passed" 2026-02-15 05:44:19.767659 | orchestrator | } 2026-02-15 05:44:19.767672 | orchestrator | 2026-02-15 05:44:19.767684 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-02-15 05:44:19.767697 | orchestrator | Sunday 15 February 2026 05:44:11 +0000 (0:00:00.336) 0:00:17.222 ******* 2026-02-15 05:44:19.767710 | orchestrator | ok: [testbed-node-0] => { 2026-02-15 05:44:19.767722 | orchestrator |  "changed": false, 2026-02-15 05:44:19.767733 | orchestrator |  "msg": "All assertions passed" 2026-02-15 05:44:19.767744 | orchestrator | } 2026-02-15 05:44:19.767755 | orchestrator | 2026-02-15 05:44:19.767766 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-15 05:44:19.767776 | orchestrator | Sunday 15 February 2026 05:44:12 +0000 (0:00:00.693) 0:00:17.916 ******* 2026-02-15 05:44:19.767787 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:44:19.767798 | orchestrator | 2026-02-15 05:44:19.767809 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-02-15 05:44:19.767820 | orchestrator | Sunday 15 February 2026 05:44:13 +0000 (0:00:01.049) 0:00:18.965 ******* 2026-02-15 05:44:19.767831 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:44:19.767842 | orchestrator | 2026-02-15 05:44:19.767852 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-02-15 05:44:19.767863 | orchestrator | Sunday 15 February 2026 05:44:14 +0000 (0:00:01.219) 0:00:20.185 ******* 2026-02-15 05:44:19.767874 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:44:19.767885 | orchestrator | 2026-02-15 05:44:19.767896 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] 
*************************** 2026-02-15 05:44:19.767906 | orchestrator | Sunday 15 February 2026 05:44:16 +0000 (0:00:01.910) 0:00:22.095 ******* 2026-02-15 05:44:19.767917 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:44:19.767928 | orchestrator | 2026-02-15 05:44:19.767939 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-02-15 05:44:19.767950 | orchestrator | Sunday 15 February 2026 05:44:17 +0000 (0:00:01.085) 0:00:23.181 ******* 2026-02-15 05:44:19.767987 | orchestrator | ok: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:44:19.768011 | orchestrator | ok: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:44:19.768034 | orchestrator | ok: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:44:19.768047 | orchestrator | 2026-02-15 05:44:19.768058 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] 
****************** 2026-02-15 05:44:19.768069 | orchestrator | Sunday 15 February 2026 05:44:18 +0000 (0:00:00.817) 0:00:23.998 ******* 2026-02-15 05:44:19.768113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:44:31.262636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:44:31.262795 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:44:31.262814 | orchestrator | 2026-02-15 05:44:31.262828 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-02-15 05:44:31.262841 | orchestrator | Sunday 15 February 2026 05:44:19 +0000 (0:00:01.415) 0:00:25.413 ******* 2026-02-15 05:44:31.262852 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-15 05:44:31.262863 | orchestrator | ok: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-15 05:44:31.262873 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-02-15 05:44:31.262884 | orchestrator | 2026-02-15 05:44:31.262895 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-02-15 05:44:31.262906 | orchestrator | Sunday 15 February 2026 05:44:21 +0000 (0:00:01.483) 0:00:26.896 ******* 2026-02-15 05:44:31.262916 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-15 05:44:31.262927 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-15 05:44:31.262937 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-02-15 05:44:31.262948 | orchestrator | 2026-02-15 05:44:31.262958 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-02-15 05:44:31.262969 | orchestrator | Sunday 15 February 2026 05:44:23 +0000 (0:00:01.998) 0:00:28.895 ******* 2026-02-15 05:44:31.262980 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-15 05:44:31.262990 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-15 05:44:31.263001 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-02-15 05:44:31.263012 | orchestrator | 2026-02-15 05:44:31.263022 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-02-15 05:44:31.263033 | orchestrator | Sunday 15 February 2026 05:44:24 +0000 (0:00:01.332) 0:00:30.227 ******* 2026-02-15 05:44:31.263043 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-15 05:44:31.263054 | orchestrator | 
ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-15 05:44:31.263064 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-02-15 05:44:31.263075 | orchestrator | 2026-02-15 05:44:31.263086 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-02-15 05:44:31.263185 | orchestrator | Sunday 15 February 2026 05:44:25 +0000 (0:00:01.350) 0:00:31.578 ******* 2026-02-15 05:44:31.263201 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-15 05:44:31.263213 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-15 05:44:31.263237 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-02-15 05:44:31.263250 | orchestrator | 2026-02-15 05:44:31.263263 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-02-15 05:44:31.263276 | orchestrator | Sunday 15 February 2026 05:44:27 +0000 (0:00:01.237) 0:00:32.815 ******* 2026-02-15 05:44:31.263288 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-15 05:44:31.263300 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-15 05:44:31.263312 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-02-15 05:44:31.263324 | orchestrator | 2026-02-15 05:44:31.263336 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-02-15 05:44:31.263348 | orchestrator | Sunday 15 February 2026 05:44:28 +0000 (0:00:01.565) 0:00:34.381 ******* 2026-02-15 05:44:31.263361 | orchestrator | included: /ansible/roles/rabbitmq/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2026-02-15 05:44:31.263374 | orchestrator | 2026-02-15 05:44:31.263386 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over extra CA certificates] ******* 2026-02-15 05:44:31.263399 | orchestrator | Sunday 15 February 2026 05:44:29 +0000 (0:00:01.014) 0:00:35.396 ******* 2026-02-15 05:44:31.263418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:44:31.263434 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:44:31.263458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:44:36.498419 | orchestrator | 2026-02-15 05:44:36.498518 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS certificate] *** 2026-02-15 05:44:36.498530 | orchestrator | Sunday 15 February 2026 05:44:31 +0000 (0:00:01.507) 0:00:36.903 ******* 2026-02-15 05:44:36.498557 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 05:44:36.498567 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:44:36.498575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 05:44:36.498581 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:44:36.498587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 05:44:36.498612 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:44:36.498618 | orchestrator | 2026-02-15 05:44:36.498624 | orchestrator | TASK [service-cert-copy : rabbitmq | Copying over backend internal TLS key] **** 2026-02-15 05:44:36.498630 | orchestrator | Sunday 15 February 2026 05:44:31 +0000 (0:00:00.442) 0:00:37.346 ******* 2026-02-15 05:44:36.498653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 05:44:36.498668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 05:44:36.498675 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:44:36.498681 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:44:36.498688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 05:44:36.498695 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:44:36.498708 | orchestrator | 2026-02-15 05:44:36.498715 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-15 05:44:36.498722 | orchestrator | Sunday 15 February 2026 05:44:32 +0000 (0:00:00.967) 0:00:38.313 ******* 2026-02-15 05:44:36.498729 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:44:36.498736 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:44:36.498743 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:44:36.498750 | orchestrator | 2026-02-15 05:44:36.498757 | orchestrator | TASK [service-check-containers : rabbitmq | Check containers] 
****************** 2026-02-15 05:44:36.498763 | orchestrator | Sunday 15 February 2026 05:44:35 +0000 (0:00:02.658) 0:00:40.972 ******* 2026-02-15 05:44:36.498776 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:45:28.420100 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:45:28.420336 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-02-15 05:45:28.420373 | orchestrator | 2026-02-15 05:45:28.420395 | orchestrator | TASK [service-check-containers : rabbitmq | Notify handlers to restart containers] *** 2026-02-15 05:45:28.420448 | orchestrator | Sunday 15 February 2026 05:44:36 +0000 (0:00:01.177) 0:00:42.150 ******* 2026-02-15 05:45:28.420471 | orchestrator | changed: [testbed-node-0] => { 2026-02-15 05:45:28.420494 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:45:28.420512 | orchestrator | } 2026-02-15 05:45:28.420529 | 
orchestrator | changed: [testbed-node-1] => { 2026-02-15 05:45:28.420548 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:45:28.420565 | orchestrator | } 2026-02-15 05:45:28.420584 | orchestrator | changed: [testbed-node-2] => { 2026-02-15 05:45:28.420601 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:45:28.420617 | orchestrator | } 2026-02-15 05:45:28.420634 | orchestrator | 2026-02-15 05:45:28.420652 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-15 05:45:28.420669 | orchestrator | Sunday 15 February 2026 05:44:36 +0000 (0:00:00.383) 0:00:42.534 ******* 2026-02-15 05:45:28.420689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 05:45:28.420708 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-15 05:45:28.420725 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-15 
05:45:28.420798 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 05:45:28.420820 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:45:28.420837 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:45:28.420854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/rabbitmq:4.1.5.20251208', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2026-02-15 05:45:28.420886 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:45:28.420985 | orchestrator | 2026-02-15 05:45:28.421004 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-02-15 05:45:28.421020 | orchestrator | Sunday 15 February 2026 05:44:38 +0000 (0:00:01.293) 0:00:43.828 ******* 2026-02-15 05:45:28.421036 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:45:28.421052 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:45:28.421069 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:45:28.421085 | orchestrator | 2026-02-15 05:45:28.421101 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-15 05:45:28.421118 | orchestrator | 2026-02-15 05:45:28.421134 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-15 05:45:28.421150 | orchestrator | Sunday 15 February 2026 05:44:39 +0000 (0:00:01.311) 0:00:45.140 ******* 2026-02-15 05:45:28.421165 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:45:28.421182 | orchestrator | 2026-02-15 05:45:28.421224 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-15 05:45:28.421241 | orchestrator | Sunday 15 February 2026 05:44:40 +0000 (0:00:01.042) 0:00:46.182 ******* 2026-02-15 05:45:28.421257 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:45:28.421274 | orchestrator | 2026-02-15 05:45:28.421289 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-15 
05:45:28.421305 | orchestrator | Sunday 15 February 2026 05:44:48 +0000 (0:00:08.273) 0:00:54.455 ******* 2026-02-15 05:45:28.421321 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:45:28.421337 | orchestrator | 2026-02-15 05:45:28.421353 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-15 05:45:28.421369 | orchestrator | Sunday 15 February 2026 05:44:56 +0000 (0:00:08.016) 0:01:02.472 ******* 2026-02-15 05:45:28.421384 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:45:28.421400 | orchestrator | 2026-02-15 05:45:28.421418 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-15 05:45:28.421433 | orchestrator | 2026-02-15 05:45:28.421450 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-15 05:45:28.421467 | orchestrator | Sunday 15 February 2026 05:45:07 +0000 (0:00:10.293) 0:01:12.766 ******* 2026-02-15 05:45:28.421483 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:45:28.421500 | orchestrator | 2026-02-15 05:45:28.421516 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-15 05:45:28.421532 | orchestrator | Sunday 15 February 2026 05:45:08 +0000 (0:00:01.084) 0:01:13.850 ******* 2026-02-15 05:45:28.421548 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:45:28.421564 | orchestrator | 2026-02-15 05:45:28.421580 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-15 05:45:28.421597 | orchestrator | Sunday 15 February 2026 05:45:15 +0000 (0:00:07.642) 0:01:21.493 ******* 2026-02-15 05:45:28.421628 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:46:15.672351 | orchestrator | 2026-02-15 05:46:15.672498 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-15 05:46:15.672529 | orchestrator | Sunday 15 February 
2026 05:45:28 +0000 (0:00:12.573) 0:01:34.066 ******* 2026-02-15 05:46:15.672543 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:46:15.672556 | orchestrator | 2026-02-15 05:46:15.672568 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-02-15 05:46:15.672608 | orchestrator | 2026-02-15 05:46:15.672620 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-02-15 05:46:15.672631 | orchestrator | Sunday 15 February 2026 05:45:37 +0000 (0:00:09.425) 0:01:43.492 ******* 2026-02-15 05:46:15.672643 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:46:15.672654 | orchestrator | 2026-02-15 05:46:15.672665 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-02-15 05:46:15.672676 | orchestrator | Sunday 15 February 2026 05:45:39 +0000 (0:00:01.279) 0:01:44.771 ******* 2026-02-15 05:46:15.672687 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:46:15.672698 | orchestrator | 2026-02-15 05:46:15.672709 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-02-15 05:46:15.672720 | orchestrator | Sunday 15 February 2026 05:45:47 +0000 (0:00:08.524) 0:01:53.295 ******* 2026-02-15 05:46:15.672730 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:46:15.672741 | orchestrator | 2026-02-15 05:46:15.672752 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-02-15 05:46:15.672763 | orchestrator | Sunday 15 February 2026 05:46:00 +0000 (0:00:13.258) 0:02:06.554 ******* 2026-02-15 05:46:15.672774 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:46:15.672785 | orchestrator | 2026-02-15 05:46:15.672798 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-02-15 05:46:15.672810 | orchestrator | 2026-02-15 05:46:15.672823 | orchestrator | TASK [Include rabbitmq 
post-deploy.yml] **************************************** 2026-02-15 05:46:15.672835 | orchestrator | Sunday 15 February 2026 05:46:10 +0000 (0:00:09.744) 0:02:16.299 ******* 2026-02-15 05:46:15.672848 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-02-15 05:46:15.672860 | orchestrator | 2026-02-15 05:46:15.672871 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-02-15 05:46:15.672883 | orchestrator | Sunday 15 February 2026 05:46:11 +0000 (0:00:00.563) 0:02:16.862 ******* 2026-02-15 05:46:15.672896 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:46:15.672909 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:46:15.672921 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:46:15.672933 | orchestrator | 2026-02-15 05:46:15.672945 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:46:15.672959 | orchestrator | testbed-node-0 : ok=31  changed=11  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 05:46:15.672986 | orchestrator | testbed-node-1 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-15 05:46:15.672999 | orchestrator | testbed-node-2 : ok=24  changed=10  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-02-15 05:46:15.673012 | orchestrator | 2026-02-15 05:46:15.673023 | orchestrator | 2026-02-15 05:46:15.673036 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:46:15.673049 | orchestrator | Sunday 15 February 2026 05:46:15 +0000 (0:00:04.033) 0:02:20.896 ******* 2026-02-15 05:46:15.673062 | orchestrator | =============================================================================== 2026-02-15 05:46:15.673075 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 33.85s 2026-02-15 05:46:15.673086 | orchestrator | rabbitmq : Waiting for rabbitmq to 
start ------------------------------- 29.46s 2026-02-15 05:46:15.673098 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode --------------------- 24.44s 2026-02-15 05:46:15.673109 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 9.23s 2026-02-15 05:46:15.673119 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 4.03s 2026-02-15 05:46:15.673130 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 3.41s 2026-02-15 05:46:15.673141 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.66s 2026-02-15 05:46:15.673160 | orchestrator | rabbitmq : Get current RabbitMQ version --------------------------------- 2.19s 2026-02-15 05:46:15.673171 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.00s 2026-02-15 05:46:15.673182 | orchestrator | rabbitmq : List RabbitMQ policies --------------------------------------- 1.91s 2026-02-15 05:46:15.673192 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.57s 2026-02-15 05:46:15.673203 | orchestrator | service-cert-copy : rabbitmq | Copying over extra CA certificates ------- 1.51s 2026-02-15 05:46:15.673214 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.48s 2026-02-15 05:46:15.673224 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 1.42s 2026-02-15 05:46:15.673235 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.35s 2026-02-15 05:46:15.673246 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.33s 2026-02-15 05:46:15.673257 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.32s 2026-02-15 05:46:15.673289 | orchestrator | rabbitmq : Restart rabbitmq container 
----------------------------------- 1.31s 2026-02-15 05:46:15.673300 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.29s 2026-02-15 05:46:15.673311 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.24s 2026-02-15 05:46:16.000220 | orchestrator | + osism apply -a upgrade openvswitch 2026-02-15 05:46:18.055756 | orchestrator | 2026-02-15 05:46:18 | INFO  | Task 8ff9cdb6-4038-44c6-81ae-eb401871932b (openvswitch) was prepared for execution. 2026-02-15 05:46:18.055864 | orchestrator | 2026-02-15 05:46:18 | INFO  | It takes a moment until task 8ff9cdb6-4038-44c6-81ae-eb401871932b (openvswitch) has been started and output is visible here. 2026-02-15 05:46:35.726473 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_play_start) in callback plugin 2026-02-15 05:46:35.726552 | orchestrator | (): Expecting value: line 2 column 1 (char 1) 2026-02-15 05:46:35.726565 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_task_start) in callback plugin 2026-02-15 05:46:35.726569 | orchestrator | (): 'NoneType' object is not subscriptable 2026-02-15 05:46:35.726578 | orchestrator | 2026-02-15 05:46:35.726582 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 05:46:35.726586 | orchestrator | 2026-02-15 05:46:35.726593 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 05:46:35.726597 | orchestrator | Sunday 15 February 2026 05:46:23 +0000 (0:00:01.556) 0:00:01.556 ******* 2026-02-15 05:46:35.726601 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:46:35.726606 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:46:35.726610 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:46:35.726613 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:46:35.726617 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:46:35.726621 | orchestrator | ok: [testbed-node-5] 
2026-02-15 05:46:35.726625 | orchestrator | 2026-02-15 05:46:35.726628 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-02-15 05:46:35.726632 | orchestrator | Sunday 15 February 2026 05:46:25 +0000 (0:00:01.306) 0:00:02.862 ******* 2026-02-15 05:46:35.726636 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-15 05:46:35.726640 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-15 05:46:35.726644 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-15 05:46:35.726647 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-15 05:46:35.726651 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-15 05:46:35.726670 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-02-15 05:46:35.726674 | orchestrator | 2026-02-15 05:46:35.726678 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-02-15 05:46:35.726682 | orchestrator | 2026-02-15 05:46:35.726686 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-02-15 05:46:35.726689 | orchestrator | Sunday 15 February 2026 05:46:26 +0000 (0:00:01.102) 0:00:03.964 ******* 2026-02-15 05:46:35.726694 | orchestrator | included: /ansible/roles/openvswitch/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 05:46:35.726699 | orchestrator | 2026-02-15 05:46:35.726703 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-02-15 05:46:35.726707 | orchestrator | Sunday 15 February 2026 05:46:28 +0000 (0:00:01.745) 0:00:05.710 ******* 2026-02-15 05:46:35.726711 | orchestrator | ok: 
[testbed-node-1] => (item=openvswitch) 2026-02-15 05:46:35.726715 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-15 05:46:35.726719 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-15 05:46:35.726722 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-15 05:46:35.726726 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-15 05:46:35.726730 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-15 05:46:35.726733 | orchestrator | 2026-02-15 05:46:35.726737 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-02-15 05:46:35.726741 | orchestrator | Sunday 15 February 2026 05:46:29 +0000 (0:00:01.430) 0:00:07.141 ******* 2026-02-15 05:46:35.726745 | orchestrator | ok: [testbed-node-3] => (item=openvswitch) 2026-02-15 05:46:35.726749 | orchestrator | ok: [testbed-node-1] => (item=openvswitch) 2026-02-15 05:46:35.726752 | orchestrator | ok: [testbed-node-2] => (item=openvswitch) 2026-02-15 05:46:35.726756 | orchestrator | ok: [testbed-node-4] => (item=openvswitch) 2026-02-15 05:46:35.726760 | orchestrator | ok: [testbed-node-0] => (item=openvswitch) 2026-02-15 05:46:35.726763 | orchestrator | ok: [testbed-node-5] => (item=openvswitch) 2026-02-15 05:46:35.726767 | orchestrator | 2026-02-15 05:46:35.726771 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-02-15 05:46:35.726775 | orchestrator | Sunday 15 February 2026 05:46:31 +0000 (0:00:01.572) 0:00:08.713 ******* 2026-02-15 05:46:35.726778 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-02-15 05:46:35.726782 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:46:35.726786 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-02-15 05:46:35.726790 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:46:35.726794 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  
2026-02-15 05:46:35.726797 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:46:35.726801 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-02-15 05:46:35.726805 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:46:35.726809 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-02-15 05:46:35.726812 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:46:35.726825 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-02-15 05:46:35.726829 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:46:35.726833 | orchestrator | 2026-02-15 05:46:35.726842 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-02-15 05:46:35.726846 | orchestrator | Sunday 15 February 2026 05:46:32 +0000 (0:00:01.899) 0:00:10.612 ******* 2026-02-15 05:46:35.726850 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:46:35.726854 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:46:35.726857 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:46:35.726861 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:46:35.726865 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:46:35.726879 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:46:35.726883 | orchestrator | 2026-02-15 05:46:35.726887 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-02-15 05:46:35.726894 | orchestrator | Sunday 15 February 2026 05:46:33 +0000 (0:00:01.057) 0:00:11.670 ******* 2026-02-15 05:46:35.726902 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:35.726910 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:35.726914 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:35.726918 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 
'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:35.726922 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:35.726931 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:38.068905 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:38.069009 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:38.069025 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:38.069037 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:38.069049 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:38.069106 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:38.069120 | orchestrator | 2026-02-15 05:46:38.069133 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-02-15 05:46:38.069145 | orchestrator | Sunday 15 February 2026 05:46:35 +0000 (0:00:01.739) 0:00:13.410 ******* 2026-02-15 05:46:38.069156 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:38.069169 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:38.069180 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:38.069192 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': 
'30'}}}) 2026-02-15 05:46:38.069211 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:38.069236 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:41.612959 | orchestrator | ok: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:41.613072 | orchestrator | ok: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:41.613089 | orchestrator | ok: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:41.613102 | orchestrator | ok: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:41.613135 | orchestrator | ok: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:41.613181 | orchestrator | ok: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-15 05:46:41.613195 | orchestrator |
2026-02-15 05:46:41.613210 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] ****************************
2026-02-15 05:46:41.613223 | orchestrator | Sunday 15 February 2026 05:46:38 +0000 (0:00:02.452) 0:00:15.862 *******
2026-02-15 05:46:41.613235 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:46:41.613247 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:46:41.613258 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:46:41.613270 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:46:41.613281 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:46:41.613292 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:46:41.613332 | orchestrator |
2026-02-15 05:46:41.613344 | orchestrator | TASK [service-check-containers : openvswitch | Check containers] ***************
2026-02-15 05:46:41.613356 | orchestrator | Sunday 15 February 2026 05:46:39 +0000 (0:00:01.363) 0:00:17.226 *******
2026-02-15 05:46:41.613368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})
2026-02-15 05:46:41.613382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image':
'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:41.613403 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:41.613420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:41.613496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:42.927457 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-02-15 05:46:42.927558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 
'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:42.927599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:42.927612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:42.927638 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:42.927667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-02-15 05:46:42.927680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})
2026-02-15 05:46:42.927692 | orchestrator |
2026-02-15 05:46:42.927705 | orchestrator | TASK [service-check-containers : openvswitch | Notify handlers to restart containers] ***
2026-02-15 05:46:42.927717 | orchestrator | Sunday 15 February 2026 05:46:41 +0000 (0:00:02.179) 0:00:19.405 *******
2026-02-15 05:46:42.927729 | orchestrator | changed: [testbed-node-0] => {
2026-02-15 05:46:42.927749 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:46:42.927761 | orchestrator | }
2026-02-15 05:46:42.927772 | orchestrator | changed: [testbed-node-1] => {
2026-02-15 05:46:42.927783 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:46:42.927794 | orchestrator | }
2026-02-15 05:46:42.927804 | orchestrator | changed: [testbed-node-2] => {
2026-02-15 05:46:42.927815 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:46:42.927826 | orchestrator | }
2026-02-15 05:46:42.927836 | orchestrator | changed: [testbed-node-3] => {
2026-02-15 05:46:42.927847 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:46:42.927858 | orchestrator | }
2026-02-15 05:46:42.927868 | orchestrator | changed: [testbed-node-4] => {
2026-02-15 05:46:42.927879 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:46:42.927890 | orchestrator | }
2026-02-15 05:46:42.927901 | orchestrator | changed: [testbed-node-5] => {
2026-02-15 05:46:42.927912 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:46:42.927923 | orchestrator | }
2026-02-15 05:46:42.927933 | orchestrator |
2026-02-15 05:46:42.927944 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-15 05:46:42.927955 | orchestrator | Sunday 15 February 2026 05:46:42 +0000 (0:00:00.878) 0:00:20.284 *******
2026-02-15 05:46:42.927967 | orchestrator | skipping:
[testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-15 05:46:42.927984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-15 05:46:42.927996 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:46:42.928007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-15 05:46:42.928026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-15 05:47:07.844829 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:47:07.844943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-15 05:47:07.844959 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-15 05:47:07.844971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-15 05:47:07.844982 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:47:07.845008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-15 05:47:07.845019 | orchestrator | [WARNING]: Failure using method (v2_playbook_on_handler_task_start) in callback 2026-02-15 05:47:07.845030 | orchestrator | plugin (): 'NoneType' object is not subscriptable 2026-02-15 05:47:07.845051 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:47:07.845061 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-15 05:47:07.845110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-15 05:47:07.845133 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:47:07.845144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-db-server:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}})  2026-02-15 05:47:07.845154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/2025.1/openvswitch-vswitchd:3.5.1.20251208', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}})  2026-02-15 05:47:07.845165 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:47:07.845174 | orchestrator | 2026-02-15 05:47:07.845185 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-15 05:47:07.845195 | orchestrator | Sunday 15 
February 2026 05:46:44 +0000 (0:00:01.895) 0:00:22.179 ******* 2026-02-15 05:47:07.845205 | orchestrator | 2026-02-15 05:47:07.845214 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-15 05:47:07.845224 | orchestrator | Sunday 15 February 2026 05:46:44 +0000 (0:00:00.166) 0:00:22.345 ******* 2026-02-15 05:47:07.845234 | orchestrator | 2026-02-15 05:47:07.845249 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-15 05:47:07.845259 | orchestrator | Sunday 15 February 2026 05:46:44 +0000 (0:00:00.165) 0:00:22.512 ******* 2026-02-15 05:47:07.845269 | orchestrator | 2026-02-15 05:47:07.845278 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-15 05:47:07.845288 | orchestrator | Sunday 15 February 2026 05:46:44 +0000 (0:00:00.146) 0:00:22.658 ******* 2026-02-15 05:47:07.845297 | orchestrator | 2026-02-15 05:47:07.845307 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-15 05:47:07.845317 | orchestrator | Sunday 15 February 2026 05:46:45 +0000 (0:00:00.403) 0:00:23.062 ******* 2026-02-15 05:47:07.845388 | orchestrator | 2026-02-15 05:47:07.845401 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-02-15 05:47:07.845412 | orchestrator | Sunday 15 February 2026 05:46:45 +0000 (0:00:00.147) 0:00:23.210 ******* 2026-02-15 05:47:07.845424 | orchestrator | 2026-02-15 05:47:07.845436 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-02-15 05:47:07.845448 | orchestrator | Sunday 15 February 2026 05:46:45 +0000 (0:00:00.164) 0:00:23.374 ******* 2026-02-15 05:47:07.845459 | orchestrator | changed: [testbed-node-3] 2026-02-15 05:47:07.845471 | orchestrator | changed: [testbed-node-4] 2026-02-15 05:47:07.845483 | orchestrator | changed: [testbed-node-5] 2026-02-15 
05:47:07.845495 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:47:07.845506 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:47:07.845518 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:47:07.845530 | orchestrator | 2026-02-15 05:47:07.845542 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-02-15 05:47:07.845554 | orchestrator | Sunday 15 February 2026 05:46:56 +0000 (0:00:10.968) 0:00:34.342 ******* 2026-02-15 05:47:07.845566 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:47:07.845578 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:47:07.845590 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:47:07.845601 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:47:07.845612 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:47:07.845623 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:47:07.845635 | orchestrator | 2026-02-15 05:47:07.845647 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-02-15 05:47:07.845658 | orchestrator | Sunday 15 February 2026 05:46:57 +0000 (0:00:01.184) 0:00:35.527 ******* 2026-02-15 05:47:07.845670 | orchestrator | changed: [testbed-node-3] 2026-02-15 05:47:07.845689 | orchestrator | changed: [testbed-node-4] 2026-02-15 05:47:21.294575 | orchestrator | changed: [testbed-node-5] 2026-02-15 05:47:21.294718 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:47:21.294736 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:47:21.294752 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:47:21.294772 | orchestrator | 2026-02-15 05:47:21.294792 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-02-15 05:47:21.294810 | orchestrator | Sunday 15 February 2026 05:47:07 +0000 (0:00:10.000) 0:00:45.527 ******* 2026-02-15 05:47:21.294822 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-1'}) 2026-02-15 05:47:21.294834 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-02-15 05:47:21.294845 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-02-15 05:47:21.294856 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-02-15 05:47:21.294867 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-02-15 05:47:21.294878 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-02-15 05:47:21.294888 | orchestrator | ok: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-02-15 05:47:21.294899 | orchestrator | ok: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-02-15 05:47:21.294910 | orchestrator | ok: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-02-15 05:47:21.294921 | orchestrator | ok: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-02-15 05:47:21.294932 | orchestrator | ok: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-02-15 05:47:21.294943 | orchestrator | ok: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-02-15 05:47:21.295073 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-15 05:47:21.295089 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-15 05:47:21.295100 | orchestrator | ok: 
[testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-15 05:47:21.295113 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-15 05:47:21.295125 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-15 05:47:21.295138 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-02-15 05:47:21.295151 | orchestrator | 2026-02-15 05:47:21.295164 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-02-15 05:47:21.295191 | orchestrator | Sunday 15 February 2026 05:47:14 +0000 (0:00:06.561) 0:00:52.089 ******* 2026-02-15 05:47:21.295203 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-02-15 05:47:21.295215 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:47:21.295225 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-02-15 05:47:21.295236 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:47:21.295246 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-02-15 05:47:21.295257 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:47:21.295268 | orchestrator | ok: [testbed-node-0] => (item=br-ex) 2026-02-15 05:47:21.295279 | orchestrator | ok: [testbed-node-1] => (item=br-ex) 2026-02-15 05:47:21.295289 | orchestrator | ok: [testbed-node-2] => (item=br-ex) 2026-02-15 05:47:21.295300 | orchestrator | 2026-02-15 05:47:21.295311 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-02-15 05:47:21.295321 | orchestrator | Sunday 15 February 2026 05:47:16 +0000 (0:00:02.164) 0:00:54.254 ******* 2026-02-15 05:47:21.295332 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-02-15 05:47:21.295343 | 
orchestrator | skipping: [testbed-node-3] 2026-02-15 05:47:21.295399 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-02-15 05:47:21.295410 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:47:21.295421 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-02-15 05:47:21.295432 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:47:21.295443 | orchestrator | ok: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-02-15 05:47:21.295454 | orchestrator | ok: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-02-15 05:47:21.295464 | orchestrator | ok: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-02-15 05:47:21.295475 | orchestrator | 2026-02-15 05:47:21.295486 | orchestrator | PLAY RECAP ********************************************************************* 2026-02-15 05:47:21.295499 | orchestrator | testbed-node-0 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 05:47:21.295511 | orchestrator | testbed-node-1 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 05:47:21.295542 | orchestrator | testbed-node-2 : ok=15  changed=4  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-02-15 05:47:21.295554 | orchestrator | testbed-node-3 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 05:47:21.295565 | orchestrator | testbed-node-4 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 05:47:21.295575 | orchestrator | testbed-node-5 : ok=13  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-02-15 05:47:21.295596 | orchestrator | 2026-02-15 05:47:21.295607 | orchestrator | 2026-02-15 05:47:21.295618 | orchestrator | TASKS RECAP ******************************************************************** 2026-02-15 05:47:21.295631 | orchestrator | Sunday 15 February 2026 05:47:20 +0000 (0:00:04.259) 0:00:58.514 ******* 2026-02-15 05:47:21.295650 | 
orchestrator | =============================================================================== 2026-02-15 05:47:21.295670 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.97s 2026-02-15 05:47:21.295689 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 10.00s 2026-02-15 05:47:21.295710 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 6.56s 2026-02-15 05:47:21.295731 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 4.26s 2026-02-15 05:47:21.295752 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.45s 2026-02-15 05:47:21.295771 | orchestrator | service-check-containers : openvswitch | Check containers --------------- 2.18s 2026-02-15 05:47:21.295783 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.16s 2026-02-15 05:47:21.295794 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.90s 2026-02-15 05:47:21.295804 | orchestrator | service-check-containers : Include tasks -------------------------------- 1.90s 2026-02-15 05:47:21.295815 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.75s 2026-02-15 05:47:21.295825 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.74s 2026-02-15 05:47:21.295836 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.57s 2026-02-15 05:47:21.295847 | orchestrator | module-load : Load modules ---------------------------------------------- 1.43s 2026-02-15 05:47:21.295857 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.36s 2026-02-15 05:47:21.295868 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.31s 2026-02-15 05:47:21.295878 | orchestrator | 
openvswitch : Flush Handlers -------------------------------------------- 1.19s 2026-02-15 05:47:21.295889 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.18s 2026-02-15 05:47:21.295900 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.10s 2026-02-15 05:47:21.295910 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.06s 2026-02-15 05:47:21.295921 | orchestrator | service-check-containers : openvswitch | Notify handlers to restart containers --- 0.88s 2026-02-15 05:47:21.623936 | orchestrator | + osism apply -a upgrade ovn 2026-02-15 05:47:23.762678 | orchestrator | 2026-02-15 05:47:23 | INFO  | Task cabc73ac-972e-404f-9f59-3ead384e523d (ovn) was prepared for execution. 2026-02-15 05:47:23.762807 | orchestrator | 2026-02-15 05:47:23 | INFO  | It takes a moment until task cabc73ac-972e-404f-9f59-3ead384e523d (ovn) has been started and output is visible here. 2026-02-15 05:47:47.610260 | orchestrator | 2026-02-15 05:47:47.610436 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-02-15 05:47:47.610456 | orchestrator | 2026-02-15 05:47:47.610468 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-02-15 05:47:47.610480 | orchestrator | Sunday 15 February 2026 05:47:29 +0000 (0:00:01.688) 0:00:01.689 ******* 2026-02-15 05:47:47.610491 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:47:47.610503 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:47:47.610514 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:47:47.610525 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:47:47.610536 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:47:47.610547 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:47:47.610557 | orchestrator | 2026-02-15 05:47:47.610568 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2026-02-15 05:47:47.610579 | orchestrator | Sunday 15 February 2026 05:47:33 +0000 (0:00:03.304) 0:00:04.993 ******* 2026-02-15 05:47:47.610615 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-02-15 05:47:47.610627 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-02-15 05:47:47.610638 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-02-15 05:47:47.610650 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-02-15 05:47:47.610661 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-02-15 05:47:47.610671 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-02-15 05:47:47.610682 | orchestrator | 2026-02-15 05:47:47.610693 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-02-15 05:47:47.610704 | orchestrator | 2026-02-15 05:47:47.610714 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-02-15 05:47:47.610725 | orchestrator | Sunday 15 February 2026 05:47:35 +0000 (0:00:02.617) 0:00:07.610 ******* 2026-02-15 05:47:47.610737 | orchestrator | included: /ansible/roles/ovn-controller/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 05:47:47.610752 | orchestrator | 2026-02-15 05:47:47.610764 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-02-15 05:47:47.610776 | orchestrator | Sunday 15 February 2026 05:47:39 +0000 (0:00:03.925) 0:00:11.536 ******* 2026-02-15 05:47:47.610791 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.610807 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.610820 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.610833 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.610845 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.610891 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.610912 | orchestrator | 2026-02-15 05:47:47.610924 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-02-15 05:47:47.610937 | orchestrator | Sunday 15 February 2026 05:47:42 +0000 (0:00:02.535) 0:00:14.071 ******* 2026-02-15 05:47:47.610950 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.610962 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.610975 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.610988 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.611001 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.611014 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.611027 | orchestrator | 2026-02-15 05:47:47.611038 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-02-15 05:47:47.611048 | orchestrator | Sunday 15 February 2026 05:47:45 +0000 (0:00:02.930) 0:00:17.001 
******* 2026-02-15 05:47:47.611059 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.611076 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:47.611101 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:55.221867 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:47:55.222012 | orchestrator | ok: [testbed-node-4] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222129 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222143 | orchestrator |
2026-02-15 05:47:55.222157 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-02-15 05:47:55.222169 | orchestrator | Sunday 15 February 2026 05:47:47 +0000 (0:00:02.360) 0:00:19.362 *******
2026-02-15 05:47:55.222215 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222228 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222239 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222251 | orchestrator | ok: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222301 | orchestrator | ok: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222334 | orchestrator | ok: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222347 | orchestrator |
2026-02-15 05:47:55.222360 | orchestrator | TASK [service-check-containers : ovn_controller | Check containers] ************
2026-02-15 05:47:55.222373 | orchestrator | Sunday 15 February 2026 05:47:50 +0000 (0:00:03.070) 0:00:22.433 *******
2026-02-15 05:47:55.222387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222457 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222496 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:47:55.222519 | orchestrator |
2026-02-15 05:47:55.222531 | orchestrator | TASK [service-check-containers : ovn_controller | Notify handlers to restart containers] ***
2026-02-15 05:47:55.222544 | orchestrator | Sunday 15 February 2026 05:47:53 +0000 (0:00:02.511) 0:00:24.944 *******
2026-02-15 05:47:55.222557 | orchestrator | changed: [testbed-node-0] => {
2026-02-15 05:47:55.222571 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:47:55.222584 | orchestrator | }
2026-02-15 05:47:55.222596 | orchestrator | changed: [testbed-node-1] => {
2026-02-15 05:47:55.222607 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:47:55.222618 | orchestrator | }
2026-02-15 05:47:55.222629 | orchestrator | changed: [testbed-node-2] => {
2026-02-15 05:47:55.222640 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:47:55.222650 | orchestrator | }
2026-02-15 05:47:55.222661 | orchestrator | changed: [testbed-node-3] => {
2026-02-15 05:47:55.222677 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:47:55.222689 | orchestrator | }
2026-02-15 05:47:55.222699 | orchestrator | changed: [testbed-node-4] => {
2026-02-15 05:47:55.222710 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:47:55.222721 | orchestrator | }
2026-02-15 05:47:55.222732 | orchestrator | changed: [testbed-node-5] => {
2026-02-15 05:47:55.222742 | orchestrator |  "msg": "Notifying handlers"
2026-02-15 05:47:55.222753 | orchestrator | }
2026-02-15 05:47:55.222764 | orchestrator |
2026-02-15 05:47:55.222775 | orchestrator | TASK [service-check-containers : Include tasks] ********************************
2026-02-15 05:47:55.222786 | orchestrator | Sunday 15 February 2026 05:47:55 +0000 (0:00:01.904) 0:00:26.849 *******
2026-02-15 05:47:55.222808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:48:24.226352 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:48:24.226527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:48:24.226544 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:48:24.226550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:48:24.226556 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:48:24.226561 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:48:24.226566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:48:24.226590 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:48:24.226595 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:48:24.226600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-controller:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:48:24.226605 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:48:24.226610 | orchestrator |
2026-02-15 05:48:24.226615 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-02-15 05:48:24.226621 | orchestrator | Sunday 15 February 2026 05:47:57 +0000 (0:00:02.515) 0:00:29.365 *******
2026-02-15 05:48:24.226626 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:48:24.226631 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:48:24.226636 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:48:24.226641 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:48:24.226646 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:48:24.226650 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:48:24.226655 | orchestrator |
2026-02-15 05:48:24.226660 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-02-15 05:48:24.226665 | orchestrator | Sunday 15 February 2026 05:48:01 +0000 (0:00:03.689) 0:00:33.055 *******
2026-02-15 05:48:24.226670 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-02-15 05:48:24.226675 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-02-15 05:48:24.226691 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-02-15 05:48:24.226696 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-02-15 05:48:24.226700 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-02-15 05:48:24.226705 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-02-15 05:48:24.226710 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-15 05:48:24.226715 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-15 05:48:24.226719 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-15 05:48:24.226724 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-15 05:48:24.226729 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-15 05:48:24.226746 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-02-15 05:48:24.226751 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-02-15 05:48:24.226759 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-02-15 05:48:24.226763 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-02-15 05:48:24.226768 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-02-15 05:48:24.226778 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-02-15 05:48:24.226782 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:16641,tcp:192.168.16.11:16641,tcp:192.168.16.12:16641'})
2026-02-15 05:48:24.226787 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-15 05:48:24.226792 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-15 05:48:24.226807 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-15 05:48:24.226812 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-15 05:48:24.226825 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-15 05:48:24.226830 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-02-15 05:48:24.226834 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-15 05:48:24.226839 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-15 05:48:24.226844 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-15 05:48:24.226849 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-15 05:48:24.226853 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-15 05:48:24.226858 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-02-15 05:48:24.226863 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-15 05:48:24.226868 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-15 05:48:24.226872 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-15 05:48:24.226877 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-15 05:48:24.226882 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-15 05:48:24.226887 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-02-15 05:48:24.226891 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-15 05:48:24.226896 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-15 05:48:24.226901 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-15 05:48:24.226907 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-02-15 05:48:24.226913 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-15 05:48:24.226930 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-02-15 05:48:24.226937 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-02-15 05:48:24.226947 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-02-15 05:48:24.226952 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-02-15 05:48:24.226958 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-02-15 05:48:24.226975 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-02-15 05:48:24.226984 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-02-15 05:51:12.946161 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-15 05:51:12.946307 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-15 05:51:12.946332 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-15 05:51:12.946345 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-15 05:51:12.946357 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-02-15 05:51:12.946368 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-02-15 05:51:12.946380 | orchestrator |
2026-02-15 05:51:12.946392 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-15 05:51:12.946404 | orchestrator | Sunday 15 February 2026 05:48:21 +0000 (0:00:19.732) 0:00:52.787 *******
2026-02-15 05:51:12.946416 | orchestrator |
2026-02-15 05:51:12.946428 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-15 05:51:12.946439 | orchestrator | Sunday 15 February 2026 05:48:21 +0000 (0:00:00.444) 0:00:53.232 *******
2026-02-15 05:51:12.946451 | orchestrator |
2026-02-15 05:51:12.946462 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-15 05:51:12.946473 | orchestrator | Sunday 15 February 2026 05:48:21 +0000 (0:00:00.453) 0:00:53.686 *******
2026-02-15 05:51:12.946484 | orchestrator |
2026-02-15 05:51:12.946495 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-15 05:51:12.946507 | orchestrator | Sunday 15 February 2026 05:48:22 +0000 (0:00:00.488) 0:00:54.174 *******
2026-02-15 05:51:12.946518 | orchestrator |
2026-02-15 05:51:12.946529 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-15 05:51:12.946540 | orchestrator | Sunday 15 February 2026 05:48:22 +0000 (0:00:00.455) 0:00:54.629 *******
2026-02-15 05:51:12.946552 | orchestrator |
2026-02-15 05:51:12.946563 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-02-15 05:51:12.946575 | orchestrator | Sunday 15 February 2026 05:48:23 +0000 (0:00:00.485) 0:00:55.115 *******
2026-02-15 05:51:12.946586 | orchestrator |
2026-02-15 05:51:12.946617 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-02-15 05:51:12.946630 | orchestrator | Sunday 15 February 2026 05:48:24 +0000 (0:00:00.808) 0:00:55.923 *******
2026-02-15 05:51:12.946641 | orchestrator |
2026-02-15 05:51:12.946652 | orchestrator | STILL ALIVE [task 'ovn-controller : Restart ovn-controller container' is running] ***
2026-02-15 05:51:12.946665 | orchestrator | changed: [testbed-node-3]
2026-02-15 05:51:12.946677 | orchestrator | changed: [testbed-node-5]
2026-02-15 05:51:12.946688 | orchestrator | changed: [testbed-node-4]
2026-02-15 05:51:12.946699 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:51:12.946709 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:51:12.946718 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:51:12.946725 | orchestrator |
2026-02-15 05:51:12.946733 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-02-15 05:51:12.946741 | orchestrator |
2026-02-15 05:51:12.946749 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-15 05:51:12.946757 | orchestrator | Sunday 15 February 2026 05:50:36 +0000 (0:02:11.905) 0:03:07.829 *******
2026-02-15 05:51:12.946765 | orchestrator | included: /ansible/roles/ovn-db/tasks/upgrade.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 05:51:12.946810 | orchestrator |
2026-02-15 05:51:12.946819 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-15 05:51:12.946827 | orchestrator | Sunday 15 February 2026 05:50:38 +0000 (0:00:01.940) 0:03:09.770 *******
2026-02-15 05:51:12.946835 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-02-15 05:51:12.946842 | orchestrator |
2026-02-15 05:51:12.946850 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-02-15 05:51:12.946858 | orchestrator | Sunday 15 February 2026 05:50:39 +0000 (0:00:01.958) 0:03:11.729 *******
2026-02-15 05:51:12.946866 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.946876 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.946884 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.946891 | orchestrator |
2026-02-15 05:51:12.946899 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-02-15 05:51:12.946918 | orchestrator | Sunday 15 February 2026 05:50:41 +0000 (0:00:01.822) 0:03:13.552 *******
2026-02-15 05:51:12.946925 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.946932 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.946938 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.946945 | orchestrator |
2026-02-15 05:51:12.946951 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-02-15 05:51:12.946958 | orchestrator | Sunday 15 February 2026 05:50:43 +0000 (0:00:01.392) 0:03:14.944 *******
2026-02-15 05:51:12.946965 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.946971 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.946978 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.946984 | orchestrator |
2026-02-15 05:51:12.946991 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-02-15 05:51:12.946997 | orchestrator | Sunday 15 February 2026 05:50:44 +0000 (0:00:01.431) 0:03:16.375 *******
2026-02-15 05:51:12.947004 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.947011 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.947017 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.947024 | orchestrator |
2026-02-15 05:51:12.947030 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-02-15 05:51:12.947037 | orchestrator | Sunday 15 February 2026 05:50:46 +0000 (0:00:01.682) 0:03:18.058 *******
2026-02-15 05:51:12.947044 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.947076 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.947084 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.947090 | orchestrator |
2026-02-15 05:51:12.947097 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-02-15 05:51:12.947104 | orchestrator | Sunday 15 February 2026 05:50:47 +0000 (0:00:01.425) 0:03:19.484 *******
2026-02-15 05:51:12.947110 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:51:12.947117 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:51:12.947124 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:51:12.947130 | orchestrator |
2026-02-15 05:51:12.947137 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-02-15 05:51:12.947143 | orchestrator | Sunday 15 February 2026 05:50:49 +0000 (0:00:01.392) 0:03:20.876 *******
2026-02-15 05:51:12.947150 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.947157 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.947163 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.947170 | orchestrator |
2026-02-15 05:51:12.947176 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-02-15 05:51:12.947183 | orchestrator | Sunday 15 February 2026 05:50:50 +0000 (0:00:01.788) 0:03:22.665 *******
2026-02-15 05:51:12.947189 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.947196 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.947202 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.947209 | orchestrator |
2026-02-15 05:51:12.947215 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-02-15 05:51:12.947229 | orchestrator | Sunday 15 February 2026 05:50:52 +0000 (0:00:02.116) 0:03:24.390 *******
2026-02-15 05:51:12.947235 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.947242 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.947248 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.947255 | orchestrator |
2026-02-15 05:51:12.947261 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-02-15 05:51:12.947271 | orchestrator | Sunday 15 February 2026 05:50:54 +0000 (0:00:02.116) 0:03:26.507 *******
2026-02-15 05:51:12.947283 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.947293 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.947305 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.947316 | orchestrator |
2026-02-15 05:51:12.947327 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-02-15 05:51:12.947338 | orchestrator | Sunday 15 February 2026 05:50:56 +0000 (0:00:01.502) 0:03:28.009 *******
2026-02-15 05:51:12.947349 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:51:12.947356 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:51:12.947362 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:51:12.947369 | orchestrator |
2026-02-15 05:51:12.947375 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-02-15 05:51:12.947382 | orchestrator | Sunday 15 February 2026 05:50:57 +0000 (0:00:01.421) 0:03:29.430 *******
2026-02-15 05:51:12.947388 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:51:12.947395 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:51:12.947402 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:51:12.947408 | orchestrator |
2026-02-15 05:51:12.947415 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-02-15 05:51:12.947421 | orchestrator | Sunday 15 February 2026 05:50:59 +0000 (0:00:01.382) 0:03:30.813 *******
2026-02-15 05:51:12.947428 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.947434 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.947441 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.947447 | orchestrator |
2026-02-15 05:51:12.947454 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-02-15 05:51:12.947460 | orchestrator | Sunday 15 February 2026 05:51:00 +0000 (0:00:01.836) 0:03:32.649 *******
2026-02-15 05:51:12.947467 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.947473 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.947480 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.947486 | orchestrator |
2026-02-15 05:51:12.947493 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-02-15 05:51:12.947500 | orchestrator | Sunday 15 February 2026 05:51:02 +0000 (0:00:01.403) 0:03:34.052 *******
2026-02-15 05:51:12.947506 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.947513 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.947519 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.947526 | orchestrator |
2026-02-15 05:51:12.947532 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-02-15 05:51:12.947539 | orchestrator | Sunday 15 February 2026 05:51:04 +0000 (0:00:02.066) 0:03:36.119 *******
2026-02-15 05:51:12.947545 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:51:12.947552 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:51:12.947558 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:51:12.947565 | orchestrator |
2026-02-15 05:51:12.947571 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-02-15 05:51:12.947578 | orchestrator | Sunday 15 February 2026 05:51:05 +0000 (0:00:01.373) 0:03:37.492 *******
2026-02-15 05:51:12.947584 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:51:12.947591 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:51:12.947627 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:51:12.947635 | orchestrator |
2026-02-15 05:51:12.947642 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-02-15 05:51:12.947648 | orchestrator | Sunday 15 February 2026 05:51:07 +0000 (0:00:01.380) 0:03:38.872 *******
2026-02-15 05:51:12.947655 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:51:12.947667 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:51:12.947674 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:51:12.947680 | orchestrator |
2026-02-15 05:51:12.947687 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2026-02-15 05:51:12.947693 | orchestrator | Sunday 15 February 2026 05:51:08 +0000 (0:00:01.744) 0:03:40.617 *******
2026-02-15 05:51:12.947709 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.284847 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.284974 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.284992 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.285004 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.285014 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.285040 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.285074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.285104 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.285115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.285125 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.285135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.285145 | orchestrator |
2026-02-15 05:51:19.285157 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2026-02-15 05:51:19.285168 | orchestrator | Sunday 15 February 2026 05:51:12 +0000 (0:00:04.082) 0:03:44.699 *******
2026-02-15 05:51:19.285178 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-02-15 05:51:19.285189 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 
'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:19.285211 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:19.285222 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:19.285238 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:34.126562 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:34.126768 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:34.126789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:34.126802 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 
'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:34.126851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:34.126880 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:34.126892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:34.126903 | orchestrator | 2026-02-15 05:51:34.126916 | orchestrator | TASK [ovn-db : Ensure configuration for 
relays exists] ************************* 2026-02-15 05:51:34.126929 | orchestrator | Sunday 15 February 2026 05:51:19 +0000 (0:00:06.342) 0:03:51.042 ******* 2026-02-15 05:51:34.126954 | orchestrator | included: /ansible/roles/ovn-db/tasks/config-relay.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=1) 2026-02-15 05:51:34.126965 | orchestrator | 2026-02-15 05:51:34.126976 | orchestrator | TASK [ovn-db : Ensuring config directories exist for OVN relay containers] ***** 2026-02-15 05:51:34.126987 | orchestrator | Sunday 15 February 2026 05:51:21 +0000 (0:00:01.893) 0:03:52.936 ******* 2026-02-15 05:51:34.126998 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:51:34.127010 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:51:34.127035 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:51:34.127047 | orchestrator | 2026-02-15 05:51:34.127058 | orchestrator | TASK [ovn-db : Copying over config.json files for OVN relay services] ********** 2026-02-15 05:51:34.127069 | orchestrator | Sunday 15 February 2026 05:51:22 +0000 (0:00:01.723) 0:03:54.660 ******* 2026-02-15 05:51:34.127082 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:51:34.127095 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:51:34.127107 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:51:34.127119 | orchestrator | 2026-02-15 05:51:34.127131 | orchestrator | TASK [ovn-db : Generate config files for OVN relay services] ******************* 2026-02-15 05:51:34.127144 | orchestrator | Sunday 15 February 2026 05:51:25 +0000 (0:00:02.661) 0:03:57.322 ******* 2026-02-15 05:51:34.127157 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:51:34.127170 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:51:34.127182 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:51:34.127195 | orchestrator | 2026-02-15 05:51:34.127207 | orchestrator | TASK [service-check-containers : ovn_db | Check containers] ******************** 2026-02-15 
05:51:34.127219 | orchestrator | Sunday 15 February 2026 05:51:28 +0000 (0:00:02.932) 0:04:00.255 ******* 2026-02-15 05:51:34.127256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:34.127291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:34.127314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:34.127352 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:34.127366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:34.127389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:34.127421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:38.661405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:38.661522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.661538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.661548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': 
{'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:51:38.661569 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.661578 | orchestrator | 2026-02-15 05:51:38.661587 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-15 05:51:38.661596 | orchestrator | Sunday 15 February 2026 05:51:34 +0000 (0:00:05.615) 0:04:05.870 ******* 2026-02-15 05:51:38.661605 | orchestrator | changed: [testbed-node-0] => { 2026-02-15 05:51:38.661614 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:51:38.661680 | orchestrator | } 2026-02-15 05:51:38.661695 | orchestrator | changed: [testbed-node-1] => { 2026-02-15 05:51:38.661708 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:51:38.661721 | orchestrator | } 2026-02-15 05:51:38.661734 | orchestrator | changed: [testbed-node-2] => { 2026-02-15 05:51:38.661747 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:51:38.661760 | orchestrator | } 2026-02-15 05:51:38.661774 | orchestrator | 2026-02-15 05:51:38.661789 | orchestrator | TASK [service-check-containers : Include tasks] ******************************** 2026-02-15 05:51:38.661803 | orchestrator | Sunday 15 February 2026 
05:51:35 +0000 (0:00:01.405) 0:04:07.276 ******* 2026-02-15 05:51:38.661818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.661855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.661881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.661895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': 
{'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.661909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.661930 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.661946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641', 'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-northd:25.3.1.20251208', 
'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.661974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'environment': {'OVN_NB_DB': 'tcp:192.168.16.10:6641,tcp:192.168.16.11:6641,tcp:192.168.16.12:6641'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-nb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.661989 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'environment': {'OVN_SB_DB': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-server:25.3.1.20251208', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-02-15 05:51:38.662129 | orchestrator | included: /ansible/roles/service-check-containers/tasks/iterated.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'ovn-sb-db-relay', 'value': {'container_name': 'ovn_sb_db_relay', 'group': 'ovn-sb-db-relay', 'enabled': True, 'environment': {'RELAY_ID': '1'}, 'image': 'registry.osism.tech/kolla/release/2025.1/ovn-sb-db-relay:25.3.1.20251208', 'iterate': True, 'iterate_var': '1', 'volumes': ['/etc/kolla/ovn-sb-db-relay/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-02-15 05:53:12.476792 | orchestrator | 2026-02-15 05:53:12.476914 | orchestrator | TASK 
[service-check-containers : ovn_db | Check containers with iteration] ***** 2026-02-15 05:53:12.476932 | orchestrator | Sunday 15 February 2026 05:51:38 +0000 (0:00:03.133) 0:04:10.409 ******* 2026-02-15 05:53:12.476945 | orchestrator | changed: [testbed-node-0] => (item=[1]) 2026-02-15 05:53:12.476957 | orchestrator | changed: [testbed-node-1] => (item=[1]) 2026-02-15 05:53:12.476968 | orchestrator | changed: [testbed-node-2] => (item=[1]) 2026-02-15 05:53:12.476980 | orchestrator | 2026-02-15 05:53:12.476991 | orchestrator | TASK [service-check-containers : ovn_db | Notify handlers to restart containers] *** 2026-02-15 05:53:12.477003 | orchestrator | Sunday 15 February 2026 05:51:40 +0000 (0:00:02.239) 0:04:12.648 ******* 2026-02-15 05:53:12.477014 | orchestrator | changed: [testbed-node-0] => { 2026-02-15 05:53:12.477026 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:53:12.477038 | orchestrator | } 2026-02-15 05:53:12.477049 | orchestrator | changed: [testbed-node-1] => { 2026-02-15 05:53:12.477060 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:53:12.477071 | orchestrator | } 2026-02-15 05:53:12.477082 | orchestrator | changed: [testbed-node-2] => { 2026-02-15 05:53:12.477093 | orchestrator |  "msg": "Notifying handlers" 2026-02-15 05:53:12.477103 | orchestrator | } 2026-02-15 05:53:12.477115 | orchestrator | 2026-02-15 05:53:12.477125 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-15 05:53:12.477136 | orchestrator | Sunday 15 February 2026 05:51:42 +0000 (0:00:01.386) 0:04:14.035 ******* 2026-02-15 05:53:12.477147 | orchestrator | 2026-02-15 05:53:12.477158 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-02-15 05:53:12.477169 | orchestrator | Sunday 15 February 2026 05:51:42 +0000 (0:00:00.449) 0:04:14.484 ******* 2026-02-15 05:53:12.477180 | orchestrator | 2026-02-15 05:53:12.477190 | orchestrator | TASK [ovn-db : Flush 
handlers] ************************************************* 2026-02-15 05:53:12.477201 | orchestrator | Sunday 15 February 2026 05:51:43 +0000 (0:00:00.464) 0:04:14.949 ******* 2026-02-15 05:53:12.477212 | orchestrator | 2026-02-15 05:53:12.477223 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-02-15 05:53:12.477233 | orchestrator | Sunday 15 February 2026 05:51:44 +0000 (0:00:01.032) 0:04:15.981 ******* 2026-02-15 05:53:12.477244 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:53:12.477255 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:53:12.477268 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:53:12.477281 | orchestrator | 2026-02-15 05:53:12.477312 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-02-15 05:53:12.477325 | orchestrator | Sunday 15 February 2026 05:52:01 +0000 (0:00:17.112) 0:04:33.094 ******* 2026-02-15 05:53:12.477337 | orchestrator | changed: [testbed-node-0] 2026-02-15 05:53:12.477350 | orchestrator | changed: [testbed-node-1] 2026-02-15 05:53:12.477362 | orchestrator | changed: [testbed-node-2] 2026-02-15 05:53:12.477374 | orchestrator | 2026-02-15 05:53:12.477386 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db-relay container] ******************* 2026-02-15 05:53:12.477398 | orchestrator | Sunday 15 February 2026 05:52:18 +0000 (0:00:16.726) 0:04:49.821 ******* 2026-02-15 05:53:12.477435 | orchestrator | changed: [testbed-node-0] => (item=1) 2026-02-15 05:53:12.477447 | orchestrator | changed: [testbed-node-1] => (item=1) 2026-02-15 05:53:12.477460 | orchestrator | changed: [testbed-node-2] => (item=1) 2026-02-15 05:53:12.477472 | orchestrator | 2026-02-15 05:53:12.477484 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-02-15 05:53:12.477496 | orchestrator | Sunday 15 February 2026 05:52:34 +0000 (0:00:16.255) 0:05:06.077 ******* 2026-02-15 
05:53:12.477508 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:53:12.477520 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:53:12.477532 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:53:12.477544 | orchestrator |
2026-02-15 05:53:12.477557 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-02-15 05:53:12.477570 | orchestrator | Sunday 15 February 2026 05:52:51 +0000 (0:00:17.220) 0:05:23.298 *******
2026-02-15 05:53:12.477583 | orchestrator | Pausing for 5 seconds
2026-02-15 05:53:12.477595 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:53:12.477608 | orchestrator |
2026-02-15 05:53:12.477620 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-02-15 05:53:12.477633 | orchestrator | Sunday 15 February 2026 05:52:57 +0000 (0:00:06.167) 0:05:29.466 *******
2026-02-15 05:53:12.477644 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:53:12.477655 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:53:12.477665 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:53:12.477676 | orchestrator |
2026-02-15 05:53:12.477686 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-02-15 05:53:12.477697 | orchestrator | Sunday 15 February 2026 05:52:59 +0000 (0:00:01.915) 0:05:31.381 *******
2026-02-15 05:53:12.477730 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:53:12.477741 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:53:12.477751 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:53:12.477762 | orchestrator |
2026-02-15 05:53:12.477773 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-02-15 05:53:12.477783 | orchestrator | Sunday 15 February 2026 05:53:01 +0000 (0:00:01.810) 0:05:33.192 *******
2026-02-15 05:53:12.477794 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:53:12.477804 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:53:12.477815 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:53:12.477825 | orchestrator |
2026-02-15 05:53:12.477836 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-02-15 05:53:12.477847 | orchestrator | Sunday 15 February 2026 05:53:03 +0000 (0:00:02.000) 0:05:35.192 *******
2026-02-15 05:53:12.477857 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:53:12.477868 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:53:12.477879 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:53:12.477889 | orchestrator |
2026-02-15 05:53:12.477900 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-02-15 05:53:12.477911 | orchestrator | Sunday 15 February 2026 05:53:05 +0000 (0:00:01.703) 0:05:36.896 *******
2026-02-15 05:53:12.477921 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:53:12.477932 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:53:12.477943 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:53:12.477953 | orchestrator |
2026-02-15 05:53:12.477964 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-02-15 05:53:12.477994 | orchestrator | Sunday 15 February 2026 05:53:07 +0000 (0:00:01.995) 0:05:38.891 *******
2026-02-15 05:53:12.478006 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:53:12.478077 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:53:12.478089 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:53:12.478099 | orchestrator |
2026-02-15 05:53:12.478110 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db-relay] ***************************************
2026-02-15 05:53:12.478121 | orchestrator | Sunday 15 February 2026 05:53:08 +0000 (0:00:01.869) 0:05:40.760 *******
2026-02-15 05:53:12.478132 | orchestrator | ok: [testbed-node-0] => (item=1)
2026-02-15 05:53:12.478142 | orchestrator | ok: [testbed-node-1] => (item=1)
2026-02-15 05:53:12.478163 | orchestrator | ok: [testbed-node-2] => (item=1)
2026-02-15 05:53:12.478174 | orchestrator |
2026-02-15 05:53:12.478184 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 05:53:12.478197 | orchestrator | testbed-node-0 : ok=49  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-15 05:53:12.478209 | orchestrator | testbed-node-1 : ok=48  changed=16  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-02-15 05:53:12.478220 | orchestrator | testbed-node-2 : ok=47  changed=15  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-02-15 05:53:12.478231 | orchestrator | testbed-node-3 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-15 05:53:12.478241 | orchestrator | testbed-node-4 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-15 05:53:12.478252 | orchestrator | testbed-node-5 : ok=12  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-15 05:53:12.478263 | orchestrator |
2026-02-15 05:53:12.478274 | orchestrator |
2026-02-15 05:53:12.478284 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 05:53:12.478301 | orchestrator | Sunday 15 February 2026 05:53:12 +0000 (0:00:03.077) 0:05:43.838 *******
2026-02-15 05:53:12.478312 | orchestrator | ===============================================================================
2026-02-15 05:53:12.478323 | orchestrator | ovn-controller : Restart ovn-controller container --------------------- 131.91s
2026-02-15 05:53:12.478334 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.73s
2026-02-15 05:53:12.478344 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 17.22s
2026-02-15 05:53:12.478355 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 17.11s
2026-02-15 05:53:12.478366 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 16.73s
2026-02-15 05:53:12.478376 | orchestrator | ovn-db : Restart ovn-sb-db-relay container ----------------------------- 16.26s
2026-02-15 05:53:12.478387 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 6.34s
2026-02-15 05:53:12.478397 | orchestrator | ovn-db : Wait for leader election --------------------------------------- 6.17s
2026-02-15 05:53:12.478408 | orchestrator | service-check-containers : ovn_db | Check containers -------------------- 5.62s
2026-02-15 05:53:12.478418 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 4.08s
2026-02-15 05:53:12.478429 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 3.93s
2026-02-15 05:53:12.478439 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.69s
2026-02-15 05:53:12.478450 | orchestrator | Group hosts based on Kolla action --------------------------------------- 3.30s
2026-02-15 05:53:12.478460 | orchestrator | ovn-controller : Flush handlers ----------------------------------------- 3.14s
2026-02-15 05:53:12.478471 | orchestrator | service-check-containers : Include tasks -------------------------------- 3.13s
2026-02-15 05:53:12.478481 | orchestrator | ovn-db : Wait for ovn-sb-db-relay --------------------------------------- 3.08s
2026-02-15 05:53:12.478492 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 3.07s
2026-02-15 05:53:12.478502 | orchestrator | ovn-db : Generate config files for OVN relay services ------------------- 2.93s
2026-02-15 05:53:12.478513 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.93s
2026-02-15 05:53:12.478547 | orchestrator | ovn-db : Copying over config.json files for OVN relay services ---------- 2.66s
2026-02-15 05:53:12.805840 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-15 05:53:12.805937 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-02-15 05:53:12.805978 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh
2026-02-15 05:53:12.817403 | orchestrator | + set -e
2026-02-15 05:53:12.817496 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-15 05:53:12.817519 | orchestrator | ++ export INTERACTIVE=false
2026-02-15 05:53:12.817538 | orchestrator | ++ INTERACTIVE=false
2026-02-15 05:53:12.817556 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-15 05:53:12.817575 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-15 05:53:12.817595 | orchestrator | + osism apply ceph-rolling_update -e ireallymeanit=yes
2026-02-15 05:53:14.987173 | orchestrator | 2026-02-15 05:53:14 | INFO  | Task 5d31f106-898c-41ed-a7cc-427455869a01 (ceph-rolling_update) was prepared for execution.
2026-02-15 05:53:14.987274 | orchestrator | 2026-02-15 05:53:14 | INFO  | It takes a moment until task 5d31f106-898c-41ed-a7cc-427455869a01 (ceph-rolling_update) has been started and output is visible here.
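The trace above shows `include.sh` exporting `INTERACTIVE=false` and `OSISM_APPLY_RETRY=1` before the script runs `osism apply ceph-rolling_update`. A minimal sketch of the retry behaviour such a setting suggests, assuming a hypothetical helper name `apply_with_retry` (the real `include.sh` may implement this differently):

```shell
#!/usr/bin/env bash
# Sketch of a retry wrapper driven by OSISM_APPLY_RETRY, as suggested by
# the exports in the trace above. apply_with_retry is a hypothetical
# name for illustration only, not code from the OSISM repositories.

apply_with_retry() {
    local max_attempts="${OSISM_APPLY_RETRY:-1}"  # default: one attempt
    local attempt=1
    # Re-run the command until it succeeds or the attempts are used up.
    while ! "$@"; do
        if [ "$attempt" -ge "$max_attempts" ]; then
            return 1  # retries exhausted, propagate failure
        fi
        attempt=$((attempt + 1))
    done
}
```

With `OSISM_APPLY_RETRY=1`, as in this job, the wrapped command runs exactly once and any failure propagates immediately to the calling script, which aborts because it runs under `set -e`.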
2026-02-15 05:54:39.315736 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-02-15 05:54:39.315939 | orchestrator | 2.16.14 2026-02-15 05:54:39.315967 | orchestrator | 2026-02-15 05:54:39.315987 | orchestrator | PLAY [Confirm whether user really meant to upgrade the cluster] **************** 2026-02-15 05:54:39.316007 | orchestrator | 2026-02-15 05:54:39.316024 | orchestrator | TASK [Exit playbook, if user did not mean to upgrade cluster] ****************** 2026-02-15 05:54:39.316042 | orchestrator | Sunday 15 February 2026 05:53:23 +0000 (0:00:01.818) 0:00:01.818 ******* 2026-02-15 05:54:39.316060 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: rbdmirrors 2026-02-15 05:54:39.316078 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: nfss 2026-02-15 05:54:39.316096 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: clients 2026-02-15 05:54:39.316115 | orchestrator | skipping: [localhost] 2026-02-15 05:54:39.316134 | orchestrator | 2026-02-15 05:54:39.316154 | orchestrator | PLAY [Gather facts and check the init system] ********************************** 2026-02-15 05:54:39.316174 | orchestrator | 2026-02-15 05:54:39.316188 | orchestrator | TASK [Gather facts on all Ceph hosts for following reference] ****************** 2026-02-15 05:54:39.316207 | orchestrator | Sunday 15 February 2026 05:53:25 +0000 (0:00:01.993) 0:00:03.812 ******* 2026-02-15 05:54:39.316225 | orchestrator | ok: [testbed-node-0] => { 2026-02-15 05:54:39.316244 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-15 05:54:39.316263 | orchestrator | } 2026-02-15 05:54:39.316283 | orchestrator | ok: [testbed-node-1] => { 2026-02-15 05:54:39.316303 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-15 05:54:39.316324 | orchestrator | } 2026-02-15 05:54:39.316344 | orchestrator | ok: [testbed-node-2] => 
{ 2026-02-15 05:54:39.316359 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-15 05:54:39.316372 | orchestrator | } 2026-02-15 05:54:39.316384 | orchestrator | ok: [testbed-node-3] => { 2026-02-15 05:54:39.316397 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-15 05:54:39.316410 | orchestrator | } 2026-02-15 05:54:39.316421 | orchestrator | ok: [testbed-node-4] => { 2026-02-15 05:54:39.316434 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-15 05:54:39.316446 | orchestrator | } 2026-02-15 05:54:39.316458 | orchestrator | ok: [testbed-node-5] => { 2026-02-15 05:54:39.316470 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-15 05:54:39.316485 | orchestrator | } 2026-02-15 05:54:39.316505 | orchestrator | ok: [testbed-manager] => { 2026-02-15 05:54:39.316544 | orchestrator |  "msg": "gather facts on all Ceph hosts for following reference" 2026-02-15 05:54:39.316563 | orchestrator | } 2026-02-15 05:54:39.316582 | orchestrator | 2026-02-15 05:54:39.316600 | orchestrator | TASK [Gather facts] ************************************************************ 2026-02-15 05:54:39.316620 | orchestrator | Sunday 15 February 2026 05:53:30 +0000 (0:00:05.190) 0:00:09.003 ******* 2026-02-15 05:54:39.316640 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:54:39.316658 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:54:39.316709 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:54:39.316727 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:54:39.316744 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:54:39.316763 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:54:39.316810 | orchestrator | ok: [testbed-manager] 2026-02-15 05:54:39.316829 | orchestrator | 2026-02-15 05:54:39.316849 | orchestrator | TASK [Gather and delegate facts] 
*********************************************** 2026-02-15 05:54:39.316868 | orchestrator | Sunday 15 February 2026 05:53:36 +0000 (0:00:05.927) 0:00:14.930 ******* 2026-02-15 05:54:39.316886 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 05:54:39.316939 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 05:54:39.316959 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 05:54:39.316976 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 05:54:39.316995 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 05:54:39.317013 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 05:54:39.317031 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 05:54:39.317050 | orchestrator | 2026-02-15 05:54:39.317068 | orchestrator | TASK [Set_fact rolling_update] ************************************************* 2026-02-15 05:54:39.317083 | orchestrator | Sunday 15 February 2026 05:54:07 +0000 (0:00:30.883) 0:00:45.814 ******* 2026-02-15 05:54:39.317094 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:54:39.317104 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:54:39.317115 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:54:39.317126 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:54:39.317136 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:54:39.317147 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:54:39.317158 | orchestrator | ok: [testbed-manager] 2026-02-15 05:54:39.317168 | orchestrator | 2026-02-15 05:54:39.317179 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 05:54:39.317190 | orchestrator | Sunday 15 February 2026 05:54:09 +0000 
(0:00:02.184) 0:00:47.999 ******* 2026-02-15 05:54:39.317201 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-15 05:54:39.317213 | orchestrator | 2026-02-15 05:54:39.317224 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 05:54:39.317235 | orchestrator | Sunday 15 February 2026 05:54:12 +0000 (0:00:02.801) 0:00:50.801 ******* 2026-02-15 05:54:39.317246 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:54:39.317256 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:54:39.317267 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:54:39.317277 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:54:39.317287 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:54:39.317298 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:54:39.317308 | orchestrator | ok: [testbed-manager] 2026-02-15 05:54:39.317320 | orchestrator | 2026-02-15 05:54:39.317354 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 05:54:39.317368 | orchestrator | Sunday 15 February 2026 05:54:15 +0000 (0:00:02.583) 0:00:53.385 ******* 2026-02-15 05:54:39.317386 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:54:39.317403 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:54:39.317420 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:54:39.317435 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:54:39.317451 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:54:39.317468 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:54:39.317487 | orchestrator | ok: [testbed-manager] 2026-02-15 05:54:39.317505 | orchestrator | 2026-02-15 05:54:39.317524 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 05:54:39.317536 | orchestrator | Sunday 15 February 2026 05:54:17 +0000 (0:00:02.077) 
0:00:55.462 ******* 2026-02-15 05:54:39.317558 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:54:39.317569 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:54:39.317579 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:54:39.317590 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:54:39.317600 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:54:39.317611 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:54:39.317621 | orchestrator | ok: [testbed-manager] 2026-02-15 05:54:39.317637 | orchestrator | 2026-02-15 05:54:39.317655 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 05:54:39.317674 | orchestrator | Sunday 15 February 2026 05:54:19 +0000 (0:00:02.604) 0:00:58.067 ******* 2026-02-15 05:54:39.317692 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:54:39.317710 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:54:39.317727 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:54:39.317744 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:54:39.317761 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:54:39.317807 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:54:39.317826 | orchestrator | ok: [testbed-manager] 2026-02-15 05:54:39.317843 | orchestrator | 2026-02-15 05:54:39.317861 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 05:54:39.317880 | orchestrator | Sunday 15 February 2026 05:54:21 +0000 (0:00:01.912) 0:00:59.979 ******* 2026-02-15 05:54:39.317898 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:54:39.317917 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:54:39.317936 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:54:39.317954 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:54:39.317971 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:54:39.317982 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:54:39.317993 | orchestrator | ok: [testbed-manager] 2026-02-15 05:54:39.318003 | 
orchestrator | 2026-02-15 05:54:39.318092 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 05:54:39.318129 | orchestrator | Sunday 15 February 2026 05:54:24 +0000 (0:00:02.229) 0:01:02.208 ******* 2026-02-15 05:54:39.318148 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:54:39.318169 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:54:39.318189 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:54:39.318209 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:54:39.318229 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:54:39.318250 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:54:39.318270 | orchestrator | ok: [testbed-manager] 2026-02-15 05:54:39.318290 | orchestrator | 2026-02-15 05:54:39.318310 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 05:54:39.318330 | orchestrator | Sunday 15 February 2026 05:54:26 +0000 (0:00:02.079) 0:01:04.288 ******* 2026-02-15 05:54:39.318350 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:54:39.318371 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:54:39.318392 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:54:39.318413 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:54:39.318433 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:54:39.318454 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:54:39.318474 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:54:39.318495 | orchestrator | 2026-02-15 05:54:39.318514 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 05:54:39.318535 | orchestrator | Sunday 15 February 2026 05:54:28 +0000 (0:00:02.185) 0:01:06.474 ******* 2026-02-15 05:54:39.318555 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:54:39.318575 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:54:39.318595 | orchestrator | ok: [testbed-node-2] 2026-02-15 
05:54:39.318615 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:54:39.318635 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:54:39.318655 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:54:39.318675 | orchestrator | ok: [testbed-manager] 2026-02-15 05:54:39.318694 | orchestrator | 2026-02-15 05:54:39.318714 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 05:54:39.318735 | orchestrator | Sunday 15 February 2026 05:54:30 +0000 (0:00:02.193) 0:01:08.667 ******* 2026-02-15 05:54:39.318831 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 05:54:39.318856 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 05:54:39.318876 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 05:54:39.318896 | orchestrator | 2026-02-15 05:54:39.318916 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 05:54:39.318936 | orchestrator | Sunday 15 February 2026 05:54:32 +0000 (0:00:01.713) 0:01:10.380 ******* 2026-02-15 05:54:39.318957 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:54:39.318977 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:54:39.318997 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:54:39.319018 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:54:39.319038 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:54:39.319057 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:54:39.319075 | orchestrator | ok: [testbed-manager] 2026-02-15 05:54:39.319094 | orchestrator | 2026-02-15 05:54:39.319114 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 05:54:39.319132 | orchestrator | Sunday 15 February 2026 05:54:34 +0000 (0:00:02.249) 0:01:12.630 ******* 2026-02-15 05:54:39.319149 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 
2026-02-15 05:54:39.319168 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 05:54:39.319187 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 05:54:39.319206 | orchestrator | 2026-02-15 05:54:39.319224 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 05:54:39.319242 | orchestrator | Sunday 15 February 2026 05:54:37 +0000 (0:00:03.250) 0:01:15.880 ******* 2026-02-15 05:54:39.319277 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-15 05:55:01.917903 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-15 05:55:01.918065 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-15 05:55:01.918081 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:55:01.918092 | orchestrator | 2026-02-15 05:55:01.918101 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 05:55:01.918110 | orchestrator | Sunday 15 February 2026 05:54:39 +0000 (0:00:01.523) 0:01:17.404 ******* 2026-02-15 05:55:01.918120 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 05:55:01.918143 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 05:55:01.918151 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 
'ansible_loop_var': 'item'})  2026-02-15 05:55:01.918160 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:55:01.918167 | orchestrator | 2026-02-15 05:55:01.918175 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 05:55:01.918183 | orchestrator | Sunday 15 February 2026 05:54:41 +0000 (0:00:01.874) 0:01:19.278 ******* 2026-02-15 05:55:01.918207 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:01.918236 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:01.918245 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:01.918253 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:55:01.918261 | orchestrator | 2026-02-15 05:55:01.918269 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 
2026-02-15 05:55:01.918277 | orchestrator | Sunday 15 February 2026 05:54:42 +0000 (0:00:01.147) 0:01:20.425 ******* 2026-02-15 05:55:01.918287 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e40f30e87190', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 05:54:35.175494', 'end': '2026-02-15 05:54:35.230842', 'delta': '0:00:00.055348', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e40f30e87190'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 05:55:01.918334 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '3aeb4857506c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 05:54:36.007450', 'end': '2026-02-15 05:54:36.060605', 'delta': '0:00:00.053155', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3aeb4857506c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 05:55:01.918345 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9cffadff9441', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 05:54:36.543428', 'end': '2026-02-15 05:54:36.591784', 
'delta': '0:00:00.048356', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9cffadff9441'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 05:55:01.918353 | orchestrator | 2026-02-15 05:55:01.918362 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 05:55:01.918372 | orchestrator | Sunday 15 February 2026 05:54:43 +0000 (0:00:01.238) 0:01:21.664 ******* 2026-02-15 05:55:01.918381 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:55:01.918391 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:55:01.918401 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:55:01.918410 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:55:01.918419 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:55:01.918435 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:55:01.918444 | orchestrator | ok: [testbed-manager] 2026-02-15 05:55:01.918453 | orchestrator | 2026-02-15 05:55:01.918462 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 05:55:01.918471 | orchestrator | Sunday 15 February 2026 05:54:45 +0000 (0:00:02.350) 0:01:24.015 ******* 2026-02-15 05:55:01.918480 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:55:01.918490 | orchestrator | 2026-02-15 05:55:01.918499 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-15 05:55:01.918508 | orchestrator | Sunday 15 February 2026 05:54:47 +0000 (0:00:01.302) 0:01:25.317 ******* 2026-02-15 05:55:01.918517 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:55:01.918526 | orchestrator | ok: 
[testbed-node-1] 2026-02-15 05:55:01.918536 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:55:01.918545 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:55:01.918553 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:55:01.918563 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:55:01.918572 | orchestrator | ok: [testbed-manager] 2026-02-15 05:55:01.918581 | orchestrator | 2026-02-15 05:55:01.918591 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-15 05:55:01.918600 | orchestrator | Sunday 15 February 2026 05:54:49 +0000 (0:00:02.168) 0:01:27.486 ******* 2026-02-15 05:55:01.918609 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:55:01.918619 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] 2026-02-15 05:55:01.918629 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-15 05:55:01.918638 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 05:55:01.918647 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-15 05:55:01.918656 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-15 05:55:01.918666 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-02-15 05:55:01.918674 | orchestrator | 2026-02-15 05:55:01.918682 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 05:55:01.918690 | orchestrator | Sunday 15 February 2026 05:54:52 +0000 (0:00:03.344) 0:01:30.831 ******* 2026-02-15 05:55:01.918725 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:55:01.918734 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:55:01.918742 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:55:01.918749 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:55:01.918757 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:55:01.918765 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:55:01.918772 | 
orchestrator | ok: [testbed-manager] 2026-02-15 05:55:01.918780 | orchestrator | 2026-02-15 05:55:01.918805 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-15 05:55:01.918813 | orchestrator | Sunday 15 February 2026 05:54:54 +0000 (0:00:02.265) 0:01:33.096 ******* 2026-02-15 05:55:01.918821 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:55:01.918829 | orchestrator | 2026-02-15 05:55:01.918839 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-15 05:55:01.918853 | orchestrator | Sunday 15 February 2026 05:54:56 +0000 (0:00:01.169) 0:01:34.265 ******* 2026-02-15 05:55:01.918866 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:55:01.918878 | orchestrator | 2026-02-15 05:55:01.918891 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 05:55:01.918904 | orchestrator | Sunday 15 February 2026 05:54:57 +0000 (0:00:01.248) 0:01:35.513 ******* 2026-02-15 05:55:01.918917 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:55:01.918930 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:55:01.918942 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:55:01.918955 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:55:01.918968 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:55:01.918981 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:55:01.918995 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:55:01.919006 | orchestrator | 2026-02-15 05:55:01.919022 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-15 05:55:01.919030 | orchestrator | Sunday 15 February 2026 05:54:59 +0000 (0:00:02.482) 0:01:37.996 ******* 2026-02-15 05:55:01.919038 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:55:01.919046 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:55:01.919053 | 
orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:01.919061 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:01.919069 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:01.919077 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:01.919093 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:12.840868 | orchestrator |
2026-02-15 05:55:12.840984 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-15 05:55:12.841000 | orchestrator | Sunday 15 February 2026 05:55:01 +0000 (0:00:02.012) 0:01:40.008 *******
2026-02-15 05:55:12.841011 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:55:12.841023 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:55:12.841033 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:12.841042 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:12.841052 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:12.841062 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:12.841072 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:12.841082 | orchestrator |
2026-02-15 05:55:12.841092 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-15 05:55:12.841101 | orchestrator | Sunday 15 February 2026 05:55:04 +0000 (0:00:02.200) 0:01:42.209 *******
2026-02-15 05:55:12.841111 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:55:12.841120 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:55:12.841130 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:12.841139 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:12.841149 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:12.841158 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:12.841168 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:12.841177 | orchestrator |
2026-02-15 05:55:12.841187 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-15 05:55:12.841197 | orchestrator | Sunday 15 February 2026 05:55:06 +0000 (0:00:01.941) 0:01:44.151 *******
2026-02-15 05:55:12.841206 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:55:12.841216 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:55:12.841225 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:12.841235 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:12.841244 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:12.841254 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:12.841264 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:12.841274 | orchestrator |
2026-02-15 05:55:12.841284 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-15 05:55:12.841294 | orchestrator | Sunday 15 February 2026 05:55:08 +0000 (0:00:02.177) 0:01:46.328 *******
2026-02-15 05:55:12.841303 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:55:12.841313 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:55:12.841322 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:12.841332 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:12.841357 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:12.841368 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:12.841379 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:12.841390 | orchestrator |
2026-02-15 05:55:12.841402 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-15 05:55:12.841414 | orchestrator | Sunday 15 February 2026 05:55:10 +0000 (0:00:02.149) 0:01:48.478 *******
2026-02-15 05:55:12.841425 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:55:12.841436 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:55:12.841447 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:12.841458 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:12.841488 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:12.841499 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:12.841510 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:12.841520 | orchestrator |
2026-02-15 05:55:12.841531 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-15 05:55:12.841542 | orchestrator | Sunday 15 February 2026 05:55:12 +0000 (0:00:02.281) 0:01:50.759 *******
2026-02-15 05:55:12.841556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:12.841569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:12.841581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:12.841611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-15 05:55:12.841626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:12.841637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:12.841649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:12.841671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37951a5f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-15 05:55:12.841693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:12.841714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192362 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:55:13.192457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-15 05:55:13.192545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47bb0aa1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-15 05:55:13.192617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192628 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.192668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-15 05:55:13.192685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.470646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.470761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.470881 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1976e1cf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-15 05:55:13.470907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.470922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.470938 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:55:13.470977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.470994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'uuids': ['d94e5f79-6313-45be-bfeb-6c020052505d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d']}})
2026-02-15 05:55:13.471026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30e735a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-15 05:55:13.471043 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769']}})
2026-02-15 05:55:13.471059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.471075 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:13.471091 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.471106 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-15 05:55:13.471131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.519125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV', 'dm-uuid-CRYPT-LUKS2-00e62f5af87144e797787951ba7c7c75-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-15 05:55:13.519253 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.519272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'uuids': ['00e62f5a-f871-44e7-9778-7951ba7c7c75'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV']}})
2026-02-15 05:55:13.519287 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55']}})
2026-02-15 05:55:13.519299 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.519332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cdab0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-15 05:55:13.519365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.519377 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.519389 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 05:55:13.519401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d', 'dm-uuid-CRYPT-LUKS2-d94e5f79631345bebfeb6c020052505d-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-15 05:55:13.519414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee', 'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'uuids': ['65aea23d-0c6f-484a-a24c-521c476a1576'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k']}})
2026-02-15 05:55:13.519459 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7cc59cd1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}})  2026-02-15 05:55:13.670532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800']}})  2026-02-15 05:55:13.670649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:13.670668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:13.670681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-31-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 05:55:13.670694 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:55:13.670707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:13.670718 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24', 'dm-uuid-CRYPT-LUKS2-d6fb5e45582d485d831faba7ab4bd3c7-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 05:55:13.670730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 
05:55:13.670780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'uuids': ['d6fb5e45-582d-485d-831f-aba7ab4bd3c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24']}})  2026-02-15 05:55:13.670871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee']}})  2026-02-15 05:55:13.670887 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:13.670902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7713f0f4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 05:55:13.670932 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:13.840024 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:13.840137 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k', 'dm-uuid-CRYPT-LUKS2-65aea23d0c6f484aa24c521c476a1576-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 05:55:13.840154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:13.840166 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'uuids': ['dcdf938a-1e00-4f8c-ba32-16bd01cbd7b7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z']}})  2026-02-15 05:55:13.840179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ca6afbc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': 
'1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 05:55:13.840190 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9']}})  2026-02-15 05:55:13.840219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:13.840254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:13.840281 | orchestrator | skipping: [testbed-node-5] 
=> (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 05:55:13.840299 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:55:13.840316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:13.840334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP', 'dm-uuid-CRYPT-LUKS2-ddc473233b6d4a8581ea0c389df91130-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 05:55:13.840350 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:13.840364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 'uuids': ['ddc47323-3b6d-4a85-81ea-0c389df91130'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP']}})  2026-02-15 05:55:13.840391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978']}})  2026-02-15 05:55:13.840419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:15.284210 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3b30427', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 05:55:15.284363 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:15.284413 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:15.284429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 
'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:15.284445 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:15.284482 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:15.284511 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-29-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 05:55:15.284587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z', 'dm-uuid-CRYPT-LUKS2-dcdf938a1e004f8cba3216bd01cbd7b7-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 05:55:15.284605 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:15.284622 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:55:15.284641 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:15.284658 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:15.284726 | orchestrator | skipping: [testbed-manager] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e', 'scsi-SQEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e'], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f6c6941f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part16', 'scsi-SQEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part14', 'scsi-SQEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part15', 'scsi-SQEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part1', 'scsi-SQEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-15 05:55:15.554085 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:15.554206 | orchestrator | skipping: [testbed-manager] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:55:15.554232 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:55:15.554255 | orchestrator | 2026-02-15 05:55:15.554274 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 05:55:15.554294 | orchestrator | Sunday 15 February 2026 05:55:15 +0000 (0:00:02.611) 0:01:53.370 ******* 2026-02-15 05:55:15.554317 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.554366 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.554384 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.554404 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.554467 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.554491 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.554510 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.554545 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37951a5f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.554582 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.567749 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.567829 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.567858 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.567870 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.567883 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.567905 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.567966 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.567979 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.568000 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47bb0aa1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15', 
'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.568019 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.568039 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.923135 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:55:15.923330 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.923354 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.923366 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.923379 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.923391 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.923424 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 
'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.923472 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.923511 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1976e1cf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': 
'913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.923547 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.923565 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:15.923583 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:55:15.923605 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 
'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.067178 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'uuids': ['d94e5f79-6313-45be-bfeb-6c020052505d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d']}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.067277 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30e735a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.067294 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769']}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.067328 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.067368 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.067397 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.067410 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:55:16.067424 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.067437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV', 'dm-uuid-CRYPT-LUKS2-00e62f5af87144e797787951ba7c7c75-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.067448 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.067465 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'uuids': ['00e62f5a-f871-44e7-9778-7951ba7c7c75'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV']}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.067493 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55']}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.216035 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.216155 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cdab0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 
'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.216195 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.216227 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.216241 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d', 'dm-uuid-CRYPT-LUKS2-d94e5f79631345bebfeb6c020052505d-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.216254 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.216272 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee', 'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'uuids': ['65aea23d-0c6f-484a-a24c-521c476a1576'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k']}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.216294 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': 
{'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7cc59cd1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.216314 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800']}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.386704 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.386909 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.386942 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:55:16.386970 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.387042 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.387067 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24', 'dm-uuid-CRYPT-LUKS2-d6fb5e45582d485d831faba7ab4bd3c7-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.387088 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.387137 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'uuids': ['d6fb5e45-582d-485d-831f-aba7ab4bd3c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24']}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.387160 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 
'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee']}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.387190 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.387223 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.387263 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7713f0f4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.467658 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'uuids': ['dcdf938a-1e00-4f8c-ba32-16bd01cbd7b7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z']}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.467786 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.467861 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ca6afbc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.467873 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.467884 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k', 'dm-uuid-CRYPT-LUKS2-65aea23d0c6f484aa24c521c476a1576-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 
'item'})  2026-02-15 05:55:16.467912 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9']}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.467939 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.467950 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': 
[], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.467960 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.467971 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.467982 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP', 'dm-uuid-CRYPT-LUKS2-ddc473233b6d4a8581ea0c389df91130-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.467999 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:55:16.829606 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 'uuids': ['ddc47323-3b6d-4a85-81ea-0c389df91130'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 
'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP']}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:16.829696 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978']}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:16.829711 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:16.829741 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3b30427', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:16.829771 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:16.829778 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:16.829785 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z', 'dm-uuid-CRYPT-LUKS2-dcdf938a1e004f8cba3216bd01cbd7b7-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None,
'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:16.829825 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:16.829835 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:16.829842 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:16.829851 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:16.829872 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:30.176790 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-29-02-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1060', 'sectorsize': '2048', 'size': '530.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:30.176924 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:30.176937 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False',
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:30.176945 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:30.176977 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e', 'scsi-SQEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'f6c6941f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part16', 'scsi-SQEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part14', 'scsi-SQEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part15', 'scsi-SQEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part1', 'scsi-SQEMU_QEMU_HARDDISK_f6c6941f-d825-4354-824d-63e95e31c47e-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:30.177008 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:30.177017 | orchestrator | skipping: [testbed-manager] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 05:55:30.177026 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:30.177035 | orchestrator |
2026-02-15 05:55:30.177043 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-15 05:55:30.177052 | orchestrator | Sunday 15 February 2026 05:55:17 +0000 (0:00:02.553) 0:01:55.924 *******
2026-02-15 05:55:30.177059 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:55:30.177068 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:55:30.177075 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:55:30.177082 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:55:30.177089 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:55:30.177102 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:55:30.177110 | orchestrator | ok: [testbed-manager]
2026-02-15 05:55:30.177117 | orchestrator |
2026-02-15 05:55:30.177125 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-15 05:55:30.177132 | orchestrator | Sunday 15 February 2026 05:55:20 +0000 (0:00:02.591) 0:01:58.515 *******
2026-02-15 05:55:30.177140 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:55:30.177147 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:55:30.177154 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:55:30.177161 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:55:30.177169 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:55:30.177176 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:55:30.177183 | orchestrator | ok: [testbed-manager]
2026-02-15 05:55:30.177190 | orchestrator |
2026-02-15 05:55:30.177198 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 05:55:30.177205 | orchestrator | Sunday 15 February 2026 05:55:22 +0000 (0:00:02.080) 0:02:00.596 *******
2026-02-15 05:55:30.177212 | orchestrator | ok: [testbed-node-0]
2026-02-15 05:55:30.177220 | orchestrator | ok: [testbed-node-1]
2026-02-15 05:55:30.177227 | orchestrator | ok: [testbed-node-2]
2026-02-15 05:55:30.177234 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:55:30.177241 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:30.177249 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:55:30.177256 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:55:30.177264 | orchestrator |
2026-02-15 05:55:30.177271 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 05:55:30.177279 | orchestrator | Sunday 15 February 2026 05:55:25 +0000 (0:00:02.903) 0:02:03.500 *******
2026-02-15 05:55:30.177287 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:55:30.177294 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:55:30.177302 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:30.177309 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:30.177317 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:30.177324 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:30.177332 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:30.177340 | orchestrator |
2026-02-15 05:55:30.177350 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 05:55:30.177358 | orchestrator | Sunday 15 February 2026 05:55:27 +0000 (0:00:01.970) 0:02:05.470 *******
2026-02-15 05:55:30.177367 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:55:30.177375 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:55:30.177384 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:30.177392 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:30.177406 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:59.193164 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:59.193260 | orchestrator | ok: [testbed-manager -> testbed-node-2(192.168.16.12)]
2026-02-15 05:55:59.193270 | orchestrator |
2026-02-15 05:55:59.193278 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 05:55:59.193286 | orchestrator | Sunday 15 February 2026 05:55:30 +0000 (0:00:02.793) 0:02:08.263 *******
2026-02-15 05:55:59.193293 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:55:59.193300 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:55:59.193306 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:59.193313 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:59.193320 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:59.193326 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:59.193333 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:59.193339 | orchestrator |
2026-02-15 05:55:59.193347 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-15 05:55:59.193354 | orchestrator | Sunday 15 February 2026 05:55:32 +0000 (0:00:02.001) 0:02:10.265 *******
2026-02-15 05:55:59.193361 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 05:55:59.193368 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-15 05:55:59.193390 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-02-15 05:55:59.193397 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-02-15 05:55:59.193404 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-02-15 05:55:59.193410 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 05:55:59.193417 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2026-02-15 05:55:59.193423 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-15 05:55:59.193430 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-15 05:55:59.193436 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-15 05:55:59.193442 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-15 05:55:59.193449 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-15 05:55:59.193455 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-15 05:55:59.193462 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-15 05:55:59.193468 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-15 05:55:59.193475 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-15 05:55:59.193481 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-15 05:55:59.193487 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-15 05:55:59.193494 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0)
2026-02-15 05:55:59.193500 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1)
2026-02-15 05:55:59.193507 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2)
2026-02-15 05:55:59.193513 | orchestrator |
2026-02-15 05:55:59.193520 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-15 05:55:59.193526 | orchestrator | Sunday 15 February 2026 05:55:35 +0000 (0:00:03.344) 0:02:13.610 *******
2026-02-15 05:55:59.193533 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 05:55:59.193540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-15 05:55:59.193547 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-15 05:55:59.193553 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:55:59.193560 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-15 05:55:59.193566 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 05:55:59.193573 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-15 05:55:59.193579 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:55:59.193586 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-02-15 05:55:59.193592 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-02-15 05:55:59.193599 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-02-15 05:55:59.193605 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:59.193612 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-15 05:55:59.193618 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-15 05:55:59.193625 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-15 05:55:59.193631 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:59.193638 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-15 05:55:59.193644 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-15 05:55:59.193651 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-15 05:55:59.193657 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:59.193664 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-15 05:55:59.193670 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-15 05:55:59.193676 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-15 05:55:59.193683 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:59.193690 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-02-15 05:55:59.193696 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-02-15 05:55:59.193708 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-02-15 05:55:59.193715 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:59.193721 | orchestrator |
2026-02-15 05:55:59.193728 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-15 05:55:59.193735 | orchestrator | Sunday 15 February 2026 05:55:37 +0000 (0:00:02.247) 0:02:15.857 *******
2026-02-15 05:55:59.193741 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:55:59.193748 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:55:59.193755 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:55:59.193761 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:55:59.193785 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 05:55:59.193793 | orchestrator |
2026-02-15 05:55:59.193800 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 05:55:59.193808 | orchestrator | Sunday 15 February 2026 05:55:39 +0000 (0:00:01.986) 0:02:17.844 *******
2026-02-15 05:55:59.193814 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:59.193821 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:59.193848 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:59.193855 | orchestrator |
2026-02-15 05:55:59.193861 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 05:55:59.193868 | orchestrator | Sunday 15 February 2026 05:55:41 +0000 (0:00:01.626) 0:02:19.471 *******
2026-02-15 05:55:59.193875 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:59.193881 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:59.193887 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:59.193894 | orchestrator |
2026-02-15 05:55:59.193901 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 05:55:59.193907 | orchestrator | Sunday 15 February 2026 05:55:42 +0000 (0:00:01.345) 0:02:20.817 *******
2026-02-15 05:55:59.193914 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:59.193988 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:55:59.193996 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:55:59.194003 | orchestrator |
2026-02-15 05:55:59.194009 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 05:55:59.194057 | orchestrator | Sunday 15 February 2026 05:55:44 +0000 (0:00:01.501) 0:02:22.319 *******
2026-02-15 05:55:59.194064 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:55:59.194071 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:55:59.194077 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:55:59.194084 | orchestrator |
2026-02-15 05:55:59.194091 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 05:55:59.194097 | orchestrator | Sunday 15 February 2026 05:55:45 +0000 (0:00:01.671) 0:02:23.991 *******
2026-02-15 05:55:59.194110 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 05:55:59.194117 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 05:55:59.194124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 05:55:59.194131 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:59.194137 | orchestrator |
2026-02-15 05:55:59.194144 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 05:55:59.194150 | orchestrator | Sunday 15 February 2026 05:55:47 +0000 (0:00:01.703) 0:02:25.694 *******
2026-02-15 05:55:59.194157 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 05:55:59.194163 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 05:55:59.194170 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 05:55:59.194176 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:59.194183 | orchestrator |
2026-02-15 05:55:59.194189 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 05:55:59.194196 | orchestrator | Sunday 15 February 2026 05:55:49 +0000 (0:00:01.711) 0:02:27.405 *******
2026-02-15 05:55:59.194221 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 05:55:59.194228 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 05:55:59.194234 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 05:55:59.194241 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:55:59.194247 | orchestrator |
2026-02-15 05:55:59.194254 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 05:55:59.194261 | orchestrator | Sunday 15 February 2026 05:55:50 +0000 (0:00:01.663) 0:02:29.068 *******
2026-02-15 05:55:59.194267 | orchestrator | ok: [testbed-node-3]
2026-02-15 05:55:59.194274 | orchestrator | ok: [testbed-node-4]
2026-02-15 05:55:59.194280 | orchestrator | ok: [testbed-node-5]
2026-02-15 05:55:59.194287 | orchestrator |
2026-02-15 05:55:59.194293 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 05:55:59.194300 | orchestrator | Sunday 15 February 2026 05:55:52 +0000 (0:00:01.443) 0:02:30.512 *******
2026-02-15 05:55:59.194306 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-15 05:55:59.194313 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-15 05:55:59.194319 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-15 05:55:59.194326 | orchestrator |
2026-02-15 05:55:59.194333 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-15 05:55:59.194339 | orchestrator | Sunday 15 February 2026 05:55:54 +0000 (0:00:01.633) 0:02:32.146 *******
2026-02-15 05:55:59.194345 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 05:55:59.194352 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 05:55:59.194360 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 05:55:59.194366 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 05:55:59.194373 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-15 05:55:59.194379 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-15 05:55:59.194386 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 05:55:59.194392 | orchestrator |
2026-02-15 05:55:59.194399 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-15 05:55:59.194405 | orchestrator | Sunday 15 February 2026 05:55:56 +0000 (0:00:02.094) 0:02:34.240 *******
2026-02-15 05:55:59.194412 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 05:55:59.194418 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 05:55:59.194425 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 05:55:59.194438 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 05:56:46.740202 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-15 05:56:46.740364 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-15 05:56:46.740382 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 05:56:46.740393 | orchestrator |
2026-02-15 05:56:46.740405 | orchestrator | TASK [ceph-infra : Update cache for Debian based OSs] **************************
2026-02-15 05:56:46.740419 | orchestrator | Sunday 15 February 2026 05:55:59 +0000 (0:00:03.037) 0:02:37.278 *******
2026-02-15 05:56:46.740438 | orchestrator | changed: [testbed-node-3]
2026-02-15 05:56:46.740457 | orchestrator | changed: [testbed-node-4]
2026-02-15 05:56:46.740475 | orchestrator | changed: [testbed-node-5]
2026-02-15 05:56:46.740492 | orchestrator | changed: [testbed-manager]
2026-02-15 05:56:46.740509 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:56:46.740525 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:56:46.740542 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:56:46.740615 | orchestrator |
2026-02-15 05:56:46.740636 | orchestrator | TASK [ceph-infra : Include_tasks configure_firewall.yml] ***********************
2026-02-15 05:56:46.740654 | orchestrator | Sunday 15 February 2026 05:56:10 +0000 (0:00:11.069) 0:02:48.347 *******
2026-02-15 05:56:46.740673 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:56:46.740694 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:56:46.740716 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:56:46.740735 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:56:46.740749 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:56:46.740761 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:56:46.740774 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:56:46.740785 | orchestrator |
2026-02-15 05:56:46.740797 | orchestrator | TASK [ceph-infra : Include_tasks setup_ntp.yml] ********************************
2026-02-15 05:56:46.740809 | orchestrator | Sunday 15 February 2026 05:56:12 +0000 (0:00:02.200) 0:02:50.548 *******
2026-02-15 05:56:46.740821 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:56:46.740834 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:56:46.740852 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:56:46.740898 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:56:46.740918 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:56:46.740935 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:56:46.740949 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:56:46.740961 | orchestrator |
2026-02-15 05:56:46.740974 | orchestrator | TASK [ceph-infra : Add logrotate configuration] ********************************
2026-02-15 05:56:46.740986 | orchestrator | Sunday 15 February 2026 05:56:14 +0000 (0:00:02.238) 0:02:52.786 *******
2026-02-15 05:56:46.740999 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:56:46.741011 | orchestrator | changed: [testbed-node-1]
2026-02-15 05:56:46.741023 | orchestrator | changed: [testbed-node-2]
2026-02-15 05:56:46.741035 | orchestrator | changed: [testbed-node-0]
2026-02-15 05:56:46.741047 | orchestrator | changed: [testbed-node-3]
2026-02-15 05:56:46.741059 | orchestrator | changed: [testbed-node-5]
2026-02-15 05:56:46.741071 | orchestrator | changed: [testbed-node-4]
2026-02-15 05:56:46.741089 | orchestrator |
2026-02-15 05:56:46.741107 | orchestrator | TASK [ceph-validate : Include check_system.yml] ********************************
2026-02-15 05:56:46.741125 | orchestrator | Sunday 15 February 2026 05:56:17 +0000 (0:00:03.152) 0:02:55.939 *******
2026-02-15 05:56:46.741144 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_system.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager
2026-02-15 05:56:46.741163 | orchestrator |
2026-02-15 05:56:46.741180 | orchestrator | TASK [ceph-validate : Fail on unsupported ansible version (1.X)] ***************
2026-02-15 05:56:46.741198 | orchestrator | Sunday 15 February 2026 05:56:20 +0000 (0:00:03.038) 0:02:58.977 *******
2026-02-15 05:56:46.741216 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:56:46.741235 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:56:46.741255 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:56:46.741273 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:56:46.741333 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:56:46.741346 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:56:46.741356 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:56:46.741367 | orchestrator |
2026-02-15 05:56:46.741378 | orchestrator | TASK [ceph-validate : Fail on unsupported system] ******************************
2026-02-15 05:56:46.741389 | orchestrator | Sunday 15 February 2026 05:56:22 +0000 (0:00:01.902) 0:03:00.880 *******
2026-02-15 05:56:46.741399 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:56:46.741410 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:56:46.741420 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:56:46.741436 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:56:46.741453 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:56:46.741472 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:56:46.741489 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:56:46.741514 | orchestrator |
2026-02-15 05:56:46.741525 | orchestrator | TASK [ceph-validate : Fail on unsupported architecture] ************************
2026-02-15 05:56:46.741536 | orchestrator | Sunday 15 February 2026 05:56:24 +0000 (0:00:02.159) 0:03:03.039 *******
2026-02-15 05:56:46.741546 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:56:46.741557 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:56:46.741567 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:56:46.741578 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:56:46.741589 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:56:46.741599 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:56:46.741610 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:56:46.741620 | orchestrator |
2026-02-15 05:56:46.741631 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution] ************************
2026-02-15 05:56:46.741642 | orchestrator | Sunday 15 February 2026 05:56:26 +0000 (0:00:02.014) 0:03:05.054 *******
2026-02-15 05:56:46.741653 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:56:46.741663 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:56:46.741673 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:56:46.741688 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:56:46.741706 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:56:46.741723 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:56:46.741748 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:56:46.741766 | orchestrator |
2026-02-15 05:56:46.741811 | orchestrator | TASK [ceph-validate : Fail on unsupported
CentOS release] ********************** 2026-02-15 05:56:46.741830 | orchestrator | Sunday 15 February 2026 05:56:29 +0000 (0:00:02.199) 0:03:07.253 ******* 2026-02-15 05:56:46.741842 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:56:46.741890 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:56:46.741904 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:56:46.741915 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:56:46.741925 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:56:46.741936 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:56:46.741946 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:56:46.741957 | orchestrator | 2026-02-15 05:56:46.741967 | orchestrator | TASK [ceph-validate : Fail on unsupported distribution for ubuntu cloud archive] *** 2026-02-15 05:56:46.741979 | orchestrator | Sunday 15 February 2026 05:56:31 +0000 (0:00:02.004) 0:03:09.258 ******* 2026-02-15 05:56:46.741989 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:56:46.742000 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:56:46.742010 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:56:46.742095 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:56:46.742107 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:56:46.742118 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:56:46.742129 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:56:46.742140 | orchestrator | 2026-02-15 05:56:46.742151 | orchestrator | TASK [ceph-validate : Fail on unsupported SUSE/openSUSE distribution (only 15.x supported)] *** 2026-02-15 05:56:46.742162 | orchestrator | Sunday 15 February 2026 05:56:33 +0000 (0:00:02.219) 0:03:11.478 ******* 2026-02-15 05:56:46.742173 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:56:46.742184 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:56:46.742194 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:56:46.742205 | orchestrator | 
skipping: [testbed-node-3] 2026-02-15 05:56:46.742216 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:56:46.742227 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:56:46.742238 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:56:46.742252 | orchestrator | 2026-02-15 05:56:46.742271 | orchestrator | TASK [ceph-validate : Fail if systemd is not present] ************************** 2026-02-15 05:56:46.742290 | orchestrator | Sunday 15 February 2026 05:56:35 +0000 (0:00:02.064) 0:03:13.543 ******* 2026-02-15 05:56:46.742308 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:56:46.742325 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:56:46.742342 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:56:46.742375 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:56:46.742395 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:56:46.742413 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:56:46.742430 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:56:46.742441 | orchestrator | 2026-02-15 05:56:46.742452 | orchestrator | TASK [ceph-validate : Validate repository variables in non-containerized scenario] *** 2026-02-15 05:56:46.742462 | orchestrator | Sunday 15 February 2026 05:56:37 +0000 (0:00:02.143) 0:03:15.686 ******* 2026-02-15 05:56:46.742473 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:56:46.742484 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:56:46.742494 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:56:46.742504 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:56:46.742515 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:56:46.742525 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:56:46.742536 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:56:46.742546 | orchestrator | 2026-02-15 05:56:46.742557 | orchestrator | TASK [ceph-validate : Validate osd_objectstore] ******************************** 2026-02-15 
05:56:46.742567 | orchestrator | Sunday 15 February 2026 05:56:39 +0000 (0:00:02.104) 0:03:17.791 ******* 2026-02-15 05:56:46.742578 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:56:46.742589 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:56:46.742599 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:56:46.742610 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:56:46.742620 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:56:46.742631 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:56:46.742641 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:56:46.742652 | orchestrator | 2026-02-15 05:56:46.742663 | orchestrator | TASK [ceph-validate : Validate radosgw network configuration] ****************** 2026-02-15 05:56:46.742673 | orchestrator | Sunday 15 February 2026 05:56:41 +0000 (0:00:01.917) 0:03:19.709 ******* 2026-02-15 05:56:46.742684 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:56:46.742694 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:56:46.742705 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:56:46.742715 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:56:46.742726 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:56:46.742736 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:56:46.742747 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:56:46.742757 | orchestrator | 2026-02-15 05:56:46.742768 | orchestrator | TASK [ceph-validate : Validate lvm osd scenario] ******************************* 2026-02-15 05:56:46.742779 | orchestrator | Sunday 15 February 2026 05:56:43 +0000 (0:00:02.182) 0:03:21.891 ******* 2026-02-15 05:56:46.742789 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:56:46.742800 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:56:46.742810 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:56:46.742821 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:56:46.742831 | orchestrator 
| skipping: [testbed-node-4] 2026-02-15 05:56:46.742842 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:56:46.742852 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:56:46.742931 | orchestrator | 2026-02-15 05:56:46.742942 | orchestrator | TASK [ceph-validate : Validate bluestore lvm osd scenario] ********************* 2026-02-15 05:56:46.742953 | orchestrator | Sunday 15 February 2026 05:56:45 +0000 (0:00:02.065) 0:03:23.956 ******* 2026-02-15 05:56:46.742964 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:56:46.742975 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:56:46.742985 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:56:46.742998 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 05:56:46.743011 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 05:56:46.743029 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:56:46.743053 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})  2026-02-15 05:57:11.939120 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})  2026-02-15 05:57:11.939274 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:11.939300 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 05:57:11.939313 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  
2026-02-15 05:57:11.939325 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:11.939336 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:57:11.939347 | orchestrator | 2026-02-15 05:57:11.939359 | orchestrator | TASK [ceph-validate : Fail if local scenario is enabled on debian] ************* 2026-02-15 05:57:11.939371 | orchestrator | Sunday 15 February 2026 05:56:48 +0000 (0:00:02.309) 0:03:26.265 ******* 2026-02-15 05:57:11.939382 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:57:11.939393 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:57:11.939404 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:57:11.939415 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:11.939426 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:11.939436 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:11.939447 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:57:11.939457 | orchestrator | 2026-02-15 05:57:11.939468 | orchestrator | TASK [ceph-validate : Fail if rhcs repository is enabled on debian] ************ 2026-02-15 05:57:11.939479 | orchestrator | Sunday 15 February 2026 05:56:50 +0000 (0:00:01.912) 0:03:28.178 ******* 2026-02-15 05:57:11.939490 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:57:11.939506 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:57:11.939525 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:57:11.939543 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:11.939563 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:11.939582 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:11.939602 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:57:11.939621 | orchestrator | 2026-02-15 05:57:11.939641 | orchestrator | TASK [ceph-validate : Check ceph_origin definition on SUSE/openSUSE Leap] ****** 2026-02-15 05:57:11.939656 | orchestrator | Sunday 15 February 2026 05:56:52 +0000 (0:00:02.236) 0:03:30.415 ******* 
2026-02-15 05:57:11.939669 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:57:11.939682 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:57:11.939694 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:57:11.939706 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:11.939719 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:11.939731 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:11.939744 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:57:11.939756 | orchestrator | 2026-02-15 05:57:11.939769 | orchestrator | TASK [ceph-validate : Check ceph_repository definition on SUSE/openSUSE Leap] *** 2026-02-15 05:57:11.939781 | orchestrator | Sunday 15 February 2026 05:56:54 +0000 (0:00:02.009) 0:03:32.424 ******* 2026-02-15 05:57:11.939794 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:57:11.939806 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:57:11.939818 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:57:11.939831 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:11.939843 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:11.939855 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:11.939867 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:57:11.939915 | orchestrator | 2026-02-15 05:57:11.939933 | orchestrator | TASK [ceph-validate : Validate ntp daemon type] ******************************** 2026-02-15 05:57:11.939953 | orchestrator | Sunday 15 February 2026 05:56:56 +0000 (0:00:02.298) 0:03:34.723 ******* 2026-02-15 05:57:11.939972 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:57:11.940030 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:57:11.940044 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:57:11.940054 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:11.940065 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:11.940075 | orchestrator | skipping: [testbed-node-5] 
2026-02-15 05:57:11.940086 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:57:11.940097 | orchestrator | 2026-02-15 05:57:11.940107 | orchestrator | TASK [ceph-validate : Abort if ntp_daemon_type is ntpd on Atomic] ************** 2026-02-15 05:57:11.940118 | orchestrator | Sunday 15 February 2026 05:56:58 +0000 (0:00:02.049) 0:03:36.773 ******* 2026-02-15 05:57:11.940129 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:57:11.940139 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:57:11.940150 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:57:11.940161 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:11.940171 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:11.940181 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:11.940192 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:57:11.940203 | orchestrator | 2026-02-15 05:57:11.940213 | orchestrator | TASK [ceph-validate : Include check_devices.yml] ******************************* 2026-02-15 05:57:11.940224 | orchestrator | Sunday 15 February 2026 05:57:00 +0000 (0:00:02.120) 0:03:38.894 ******* 2026-02-15 05:57:11.940234 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:57:11.940245 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:57:11.940263 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:57:11.940281 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:57:11.940299 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_devices.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 05:57:11.940318 | orchestrator | 2026-02-15 05:57:11.940337 | orchestrator | TASK [ceph-validate : Set_fact root_device] ************************************ 2026-02-15 05:57:11.940358 | orchestrator | Sunday 15 February 2026 05:57:03 +0000 (0:00:02.507) 0:03:41.401 ******* 2026-02-15 05:57:11.940376 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:57:11.940395 | orchestrator | ok: 
[testbed-node-4] 2026-02-15 05:57:11.940406 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:57:11.940417 | orchestrator | 2026-02-15 05:57:11.940444 | orchestrator | TASK [ceph-validate : Resolve devices in lvm_volumes] ************************** 2026-02-15 05:57:11.940455 | orchestrator | Sunday 15 February 2026 05:57:04 +0000 (0:00:01.451) 0:03:42.852 ******* 2026-02-15 05:57:11.940487 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})  2026-02-15 05:57:11.940498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})  2026-02-15 05:57:11.940509 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:11.940520 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})  2026-02-15 05:57:11.940531 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})  2026-02-15 05:57:11.940542 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:11.940553 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})  2026-02-15 05:57:11.940564 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})  2026-02-15 05:57:11.940574 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:11.940585 | orchestrator | 2026-02-15 05:57:11.940596 | orchestrator | TASK [ceph-validate : Set_fact lvm_volumes_data_devices] *********************** 2026-02-15 05:57:11.940607 | orchestrator | Sunday 15 February 2026 
05:57:06 +0000 (0:00:01.437) 0:03:44.290 ******* 2026-02-15 05:57:11.940636 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'}, 'ansible_loop_var': 'item'})  2026-02-15 05:57:11.940658 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'}, 'ansible_loop_var': 'item'})  2026-02-15 05:57:11.940676 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:11.940696 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'}, 'ansible_loop_var': 'item'})  2026-02-15 05:57:11.940717 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'}, 'ansible_loop_var': 'item'})  2026-02-15 05:57:11.940736 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:11.940755 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'}, 
'ansible_loop_var': 'item'})  2026-02-15 05:57:11.940770 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.data_vg is undefined', 'item': {'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'}, 'ansible_loop_var': 'item'})  2026-02-15 05:57:11.940780 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:11.940791 | orchestrator | 2026-02-15 05:57:11.940802 | orchestrator | TASK [ceph-validate : Fail if root_device is passed in lvm_volumes or devices] *** 2026-02-15 05:57:11.940813 | orchestrator | Sunday 15 February 2026 05:57:07 +0000 (0:00:01.676) 0:03:45.967 ******* 2026-02-15 05:57:11.940824 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:11.940835 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:11.940846 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:11.940857 | orchestrator | 2026-02-15 05:57:11.940867 | orchestrator | TASK [ceph-validate : Get devices information] ********************************* 2026-02-15 05:57:11.940957 | orchestrator | Sunday 15 February 2026 05:57:09 +0000 (0:00:01.355) 0:03:47.323 ******* 2026-02-15 05:57:11.940976 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:11.940994 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:11.941012 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:11.941030 | orchestrator | 2026-02-15 05:57:11.941058 | orchestrator | TASK [ceph-validate : Fail if one of the devices is not a device] ************** 2026-02-15 05:57:11.941078 | orchestrator | Sunday 15 February 2026 05:57:10 +0000 (0:00:01.389) 0:03:48.712 ******* 2026-02-15 05:57:11.941097 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:11.941127 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:17.180459 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:17.180565 | 
orchestrator | 2026-02-15 05:57:17.180582 | orchestrator | TASK [ceph-validate : Fail when gpt header found on osd devices] *************** 2026-02-15 05:57:17.180596 | orchestrator | Sunday 15 February 2026 05:57:11 +0000 (0:00:01.317) 0:03:50.030 ******* 2026-02-15 05:57:17.180608 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:17.180619 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:17.180654 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:57:17.180665 | orchestrator | 2026-02-15 05:57:17.180677 | orchestrator | TASK [ceph-validate : Check data logical volume] ******************************* 2026-02-15 05:57:17.180688 | orchestrator | Sunday 15 February 2026 05:57:13 +0000 (0:00:01.388) 0:03:51.418 ******* 2026-02-15 05:57:17.180699 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'}) 2026-02-15 05:57:17.180712 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'}) 2026-02-15 05:57:17.180723 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'}) 2026-02-15 05:57:17.180733 | orchestrator | ok: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'}) 2026-02-15 05:57:17.180744 | orchestrator | ok: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'}) 2026-02-15 05:57:17.180755 | orchestrator | ok: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'}) 2026-02-15 05:57:17.180765 | orchestrator | 2026-02-15 05:57:17.180776 | orchestrator | TASK 
[ceph-validate : Fail if one of the data logical volume is not a device or doesn't exist] *** 2026-02-15 05:57:17.180788 | orchestrator | Sunday 15 February 2026 05:57:15 +0000 (0:00:02.202) 0:03:53.621 ******* 2026-02-15 05:57:17.180804 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-11907033-e329-56e1-bf1e-182edc1a3769/osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 956, 'dev': 6, 'nlink': 1, 'atime': 1771127405.8968446, 'mtime': 1771127405.8908446, 'ctime': 1771127405.8908446, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-11907033-e329-56e1-bf1e-182edc1a3769/osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'}, 'ansible_loop_var': 'item'})  2026-02-15 05:57:17.180852 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55/osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 966, 'dev': 6, 'nlink': 1, 'atime': 
1771127424.7901351, 'mtime': 1771127424.784135, 'ctime': 1771127424.784135, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55/osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'}, 'ansible_loop_var': 'item'})  2026-02-15 05:57:17.180919 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:57:17.180934 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-85fe8ada-5694-5853-9626-8b4c90604800/osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1771127408.0587473, 'mtime': 1771127408.052747, 'ctime': 1771127408.052747, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': 
'/dev/ceph-85fe8ada-5694-5853-9626-8b4c90604800/osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'}, 'ansible_loop_var': 'item'})  2026-02-15 05:57:17.180946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee/osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 960, 'dev': 6, 'nlink': 1, 'atime': 1771127426.912034, 'mtime': 1771127426.9080338, 'ctime': 1771127426.9080338, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee/osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'}, 'ansible_loop_var': 'item'})  2026-02-15 05:57:17.180958 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:57:17.180985 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': 
'/dev/ceph-37190823-1b54-548e-8f85-c0a5c63b57f9/osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 950, 'dev': 6, 'nlink': 1, 'atime': 1771127406.1353364, 'mtime': 1771127406.1303365, 'ctime': 1771127406.1303365, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64512, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-37190823-1b54-548e-8f85-c0a5c63b57f9/osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.399809 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'stat': {'exists': True, 'path': '/dev/ceph-fe68aa92-7c5f-5213-9184-27150181e978/osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'mode': '0660', 'isdir': False, 'ischr': False, 'isblk': True, 'isreg': False, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 6, 'size': 0, 'inode': 960, 'dev': 6, 'nlink': 1, 'atime': 1771127424.6916175, 'mtime': 1771127424.6886175, 'ctime': 1771127424.6886175, 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': True, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': False, 'xoth': False, 'isuid': False, 'isgid': False, 'blocks': 0, 'block_size': 512, 'device_type': 64513, 'readable': True, 'writeable': True, 'executable': False, 'pw_name': 'root', 'gr_name': 'disk', 'mimetype': 'inode/symlink', 'charset': 'binary', 'version': None, 'attributes': [], 'attr_flags': ''}, 'invocation': {'module_args': {'path': '/dev/ceph-fe68aa92-7c5f-5213-9184-27150181e978/osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'follow': True, 'get_checksum': True, 'get_mime': True, 'get_attributes': True, 'checksum_algorithm': 'sha1'}}, 'failed': False, 'item': {'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.399989 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:23.400020 | orchestrator |
2026-02-15 05:57:23.400042 | orchestrator | TASK [ceph-validate : Check bluestore db logical volume] ***********************
2026-02-15 05:57:23.400063 | orchestrator | Sunday 15 February 2026 05:57:17 +0000 (0:00:01.653) 0:03:55.275 *******
2026-02-15 05:57:23.400084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})
2026-02-15 05:57:23.400106 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})
2026-02-15 05:57:23.400127 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:23.400140 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 05:57:23.400152 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 05:57:23.400163 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:23.400174 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})
2026-02-15 05:57:23.400185 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})
2026-02-15 05:57:23.400218 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:23.400230 | orchestrator |
2026-02-15 05:57:23.400242 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore db logical volume is not a device or doesn't exist] ***
2026-02-15 05:57:23.400254 | orchestrator | Sunday 15 February 2026 05:57:18 +0000 (0:00:01.587) 0:03:56.863 *******
2026-02-15 05:57:23.400267 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.400281 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.400317 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:23.400331 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.400380 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.400394 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:23.400407 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.400419 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.db is defined', 'item': {'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.400431 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:23.400444 | orchestrator |
2026-02-15 05:57:23.400457 | orchestrator | TASK [ceph-validate : Check bluestore wal logical volume] **********************
2026-02-15 05:57:23.400469 | orchestrator | Sunday 15 February 2026 05:57:20 +0000 (0:00:01.486) 0:03:58.349 *******
2026-02-15 05:57:23.400482 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'})
2026-02-15 05:57:23.400495 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'})
2026-02-15 05:57:23.400515 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:23.400533 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'})
2026-02-15 05:57:23.400566 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'})
2026-02-15 05:57:23.400586 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:23.400603 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'})
2026-02-15 05:57:23.400622 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'})
2026-02-15 05:57:23.400639 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:23.400658 | orchestrator |
2026-02-15 05:57:23.400676 | orchestrator | TASK [ceph-validate : Fail if one of the bluestore wal logical volume is not a device or doesn't exist] ***
2026-02-15 05:57:23.400695 | orchestrator | Sunday 15 February 2026 05:57:21 +0000 (0:00:01.671) 0:04:00.021 *******
2026-02-15 05:57:23.400715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-11907033-e329-56e1-bf1e-182edc1a3769', 'data_vg': 'ceph-11907033-e329-56e1-bf1e-182edc1a3769'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.400830 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-308eeb04-119e-5b1b-acdb-31959eb9ce55', 'data_vg': 'ceph-308eeb04-119e-5b1b-acdb-31959eb9ce55'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.400854 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:23.400874 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-85fe8ada-5694-5853-9626-8b4c90604800', 'data_vg': 'ceph-85fe8ada-5694-5853-9626-8b4c90604800'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.400921 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-12f88160-c11a-5ad6-adc7-3b0cfe47daee', 'data_vg': 'ceph-12f88160-c11a-5ad6-adc7-3b0cfe47daee'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.400940 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:23.400967 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-37190823-1b54-548e-8f85-c0a5c63b57f9', 'data_vg': 'ceph-37190823-1b54-548e-8f85-c0a5c63b57f9'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:23.401001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'item.wal is defined', 'item': {'data': 'osd-block-fe68aa92-7c5f-5213-9184-27150181e978', 'data_vg': 'ceph-fe68aa92-7c5f-5213-9184-27150181e978'}, 'ansible_loop_var': 'item'})
2026-02-15 05:57:32.993679 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:32.993789 | orchestrator |
2026-02-15 05:57:32.993805 | orchestrator | TASK [ceph-validate : Include check_eth_rgw.yml] *******************************
2026-02-15 05:57:32.993817 | orchestrator | Sunday 15 February 2026 05:57:23 +0000 (0:00:01.466) 0:04:01.487 *******
2026-02-15 05:57:32.993829 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:32.993840 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:32.993851 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:57:32.993862 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:32.993873 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:32.993932 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:32.993944 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:57:32.993955 | orchestrator |
2026-02-15 05:57:32.993967 | orchestrator | TASK [ceph-validate : Include check_rgw_pools.yml] *****************************
2026-02-15 05:57:32.993978 | orchestrator | Sunday 15 February 2026 05:57:25 +0000 (0:00:01.948) 0:04:03.436 *******
2026-02-15 05:57:32.993989 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:32.994000 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:32.994011 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:57:32.994113 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:57:32.994128 | orchestrator | included: /ansible/roles/ceph-validate/tasks/check_rgw_pools.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-02-15 05:57:32.994139 | orchestrator |
2026-02-15 05:57:32.994150 | orchestrator | TASK [ceph-validate : Fail if ec_profile is not set for ec pools] **************
2026-02-15 05:57:32.994161 | orchestrator | Sunday 15 February 2026 05:57:27 +0000 (0:00:02.522) 0:04:05.959 *******
2026-02-15 05:57:32.994172 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994185 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994242 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994255 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994267 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994291 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994317 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:32.994329 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:32.994342 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994366 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994378 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994390 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994402 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:32.994414 | orchestrator |
2026-02-15 05:57:32.994427 | orchestrator | TASK [ceph-validate : Fail if ec_k is not set for ec pools] ********************
2026-02-15 05:57:32.994439 | orchestrator | Sunday 15 February 2026 05:57:29 +0000 (0:00:01.417) 0:04:07.377 *******
2026-02-15 05:57:32.994452 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994503 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994548 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:32.994562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994574 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994585 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994596 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994614 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994625 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:32.994635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994657 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994688 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:32.994698 | orchestrator |
2026-02-15 05:57:32.994709 | orchestrator | TASK [ceph-validate : Fail if ec_m is not set for ec pools] ********************
2026-02-15 05:57:32.994720 | orchestrator | Sunday 15 February 2026 05:57:31 +0000 (0:00:01.782) 0:04:09.160 *******
2026-02-15 05:57:32.994730 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994772 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994783 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:32.994793 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994836 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994846 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:32.994857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.994989 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.995008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-02-15 05:57:32.995019 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:32.995037 | orchestrator |
2026-02-15 05:57:32.995048 | orchestrator | TASK [ceph-validate : Include check_nfs.yml] ***********************************
2026-02-15 05:57:32.995059 | orchestrator | Sunday 15 February 2026 05:57:32 +0000 (0:00:02.037) 0:04:10.644 *******
2026-02-15 05:57:32.995070 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:32.995081 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:32.995100 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:57:48.033688 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:48.033826 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:48.033842 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:48.033854 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:57:48.033865 | orchestrator |
2026-02-15 05:57:48.033877 | orchestrator | TASK [ceph-validate : Include check_rbdmirror.yml] *****************************
2026-02-15 05:57:48.033889 | orchestrator | Sunday 15 February 2026 05:57:34 +0000 (0:00:02.157) 0:04:12.681 *******
2026-02-15 05:57:48.033936 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:48.033949 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:48.033961 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:57:48.033972 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:48.033983 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:48.033994 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:48.034005 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:57:48.034093 | orchestrator |
2026-02-15 05:57:48.034123 | orchestrator | TASK [ceph-validate : Fail if monitoring group doesn't exist] ******************
2026-02-15 05:57:48.034142 | orchestrator | Sunday 15 February 2026 05:57:36 +0000 (0:00:02.052) 0:04:14.839 *******
2026-02-15 05:57:48.034160 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:48.034179 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:48.034198 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:57:48.034217 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:48.034239 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:48.034259 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:48.034280 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:57:48.034300 | orchestrator |
2026-02-15 05:57:48.034435 | orchestrator | TASK [ceph-validate : Fail when monitoring doesn't contain at least one node.] ***
2026-02-15 05:57:48.034458 | orchestrator | Sunday 15 February 2026 05:57:38 +0000 (0:00:02.052) 0:04:16.891 *******
2026-02-15 05:57:48.034477 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:48.034496 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:48.034514 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:57:48.034533 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:48.034552 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:48.034571 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:48.034590 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:57:48.034609 | orchestrator |
2026-02-15 05:57:48.034628 | orchestrator | TASK [ceph-validate : Fail when dashboard_admin_password and/or grafana_admin_password are not set] ***
2026-02-15 05:57:48.034648 | orchestrator | Sunday 15 February 2026 05:57:40 +0000 (0:00:01.935) 0:04:18.826 *******
2026-02-15 05:57:48.034666 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:48.034684 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:48.034703 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:57:48.034722 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:48.034740 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:48.034758 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:48.034777 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:57:48.034796 | orchestrator |
2026-02-15 05:57:48.034815 | orchestrator | TASK [ceph-validate : Validate container registry credentials] *****************
2026-02-15 05:57:48.034834 | orchestrator | Sunday 15 February 2026 05:57:42 +0000 (0:00:02.087) 0:04:20.913 *******
2026-02-15 05:57:48.034852 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:48.034870 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:48.034888 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:57:48.034932 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:48.034991 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:48.035011 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:48.035030 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:57:48.035047 | orchestrator |
2026-02-15 05:57:48.035066 | orchestrator | TASK [ceph-validate : Validate container service and container package] ********
2026-02-15 05:57:48.035085 | orchestrator | Sunday 15 February 2026 05:57:44 +0000 (0:00:01.984) 0:04:22.898 *******
2026-02-15 05:57:48.035103 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:48.035121 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:48.035139 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:57:48.035157 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:48.035175 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:48.035194 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:48.035212 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:57:48.035230 | orchestrator |
2026-02-15 05:57:48.035248 | orchestrator | TASK [ceph-validate : Validate openstack_keys key format] **********************
2026-02-15 05:57:48.035266 | orchestrator | Sunday 15 February 2026 05:57:47 +0000 (0:00:02.220) 0:04:25.119 *******
2026-02-15 05:57:48.035313 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-15 05:57:48.035335 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-15 05:57:48.035354 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-15 05:57:48.035367 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-15 05:57:48.035392 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-15 05:57:48.035407 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-15 05:57:48.035418 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:48.035449 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-15 05:57:48.035462 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-15 05:57:48.035473 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-15 05:57:48.035483 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-15 05:57:48.035494 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-15 05:57:48.035505 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-15 05:57:48.035516 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:48.035526 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-15 05:57:48.035548 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-15 05:57:48.035558 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-15 05:57:48.035569 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-15 05:57:48.035580 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-15 05:57:48.035591 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-15 05:57:48.035602 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:57:48.035612 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-15 05:57:48.035623 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-15 05:57:48.035633 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-15 05:57:48.035644 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-15 05:57:48.035655 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-15 05:57:48.035665 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-15 05:57:48.035676 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-15 05:57:48.035692 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-15 05:57:48.035703 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-15 05:57:48.035714 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:48.035746 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-15 05:57:52.411315 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-15 05:57:52.411422 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-15 05:57:52.411438 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-15 05:57:52.411448 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-15 05:57:52.411482 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-15 05:57:52.411493 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:52.411559 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-15 05:57:52.411571 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-15 05:57:52.411580 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-15 05:57:52.411589 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-15 05:57:52.411598 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-15 05:57:52.411607 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-15 05:57:52.411616 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-15 05:57:52.411625 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-15 05:57:52.411633 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:57:52.411642 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-15 05:57:52.411651 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:52.411659 | orchestrator |
2026-02-15 05:57:52.411669 | orchestrator | TASK [ceph-validate : Validate clients keys key format] ************************
2026-02-15 05:57:52.411679 | orchestrator | Sunday 15 February 2026 05:57:49 +0000 (0:00:02.282) 0:04:27.402 *******
2026-02-15 05:57:52.411688 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:52.411696 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:52.411705 | orchestrator | skipping: [testbed-node-2]
2026-02-15 05:57:52.411714 | orchestrator | skipping: [testbed-node-3]
2026-02-15 05:57:52.411722 | orchestrator | skipping: [testbed-node-4]
2026-02-15 05:57:52.411731 | orchestrator | skipping: [testbed-node-5]
2026-02-15 05:57:52.411739 | orchestrator | skipping: [testbed-manager]
2026-02-15 05:57:52.411748 | orchestrator |
2026-02-15 05:57:52.411757 | orchestrator | TASK [ceph-validate : Validate openstack_keys caps] ****************************
2026-02-15 05:57:52.411766 | orchestrator | Sunday 15 February 2026 05:57:51 +0000 (0:00:02.208) 0:04:29.610 *******
2026-02-15 05:57:52.411775 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-15 05:57:52.411797 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-15 05:57:52.411807 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-15 05:57:52.411856 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-15 05:57:52.411883 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-15 05:57:52.411894 | orchestrator | skipping: [testbed-node-0] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-15 05:57:52.411930 | orchestrator | skipping: [testbed-node-0]
2026-02-15 05:57:52.411942 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-15 05:57:52.411952 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-15 05:57:52.411962 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})
2026-02-15 05:57:52.411971 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})
2026-02-15 05:57:52.411981 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})
2026-02-15 05:57:52.411991 | orchestrator | skipping: [testbed-node-1] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})
2026-02-15 05:57:52.412000 | orchestrator | skipping: [testbed-node-1]
2026-02-15 05:57:52.412010 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})
2026-02-15 05:57:52.412020 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})
2026-02-15 05:57:52.412030 | orchestrator | skipping: [testbed-node-2] => (item={'caps': 
{'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-15 05:57:52.412040 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-15 05:57:52.412050 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-15 05:57:52.412059 | orchestrator | skipping: [testbed-node-2] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-15 05:57:52.412067 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:57:52.412076 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-15 05:57:52.412084 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-15 05:57:52.412093 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-15 05:57:52.412108 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-15 05:57:52.412129 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-15 05:57:52.412138 | orchestrator | 
skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-15 05:57:52.412147 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-15 05:57:52.412162 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-15 05:58:22.030522 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-15 05:58:22.030637 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-15 05:58:22.030655 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-15 05:58:22.030670 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-15 05:58:22.030682 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-15 05:58:22.030695 | orchestrator | skipping: [testbed-node-3] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-15 05:58:22.030706 | orchestrator | skipping: 
[testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-15 05:58:22.030718 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:58:22.030731 | orchestrator | skipping: [testbed-node-4] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-15 05:58:22.030742 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:58:22.030753 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-15 05:58:22.030764 | orchestrator | skipping: [testbed-node-5] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-15 05:58:22.030775 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:58:22.030786 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.cinder-backup'})  2026-02-15 05:58:22.030797 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.cinder'})  2026-02-15 05:58:22.030808 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=volumes, profile rbd pool=images'}, 'mode': '0600', 'name': 'client.glance'})  2026-02-15 05:58:22.030844 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 'profile rbd pool=metrics'}, 'mode': '0600', 'name': 'client.gnocchi'})  2026-02-15 05:58:22.030855 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mon': 'profile rbd', 'osd': 
'profile rbd pool=images, profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups'}, 'mode': '0600', 'name': 'client.nova'})  2026-02-15 05:58:22.030867 | orchestrator | skipping: [testbed-manager] => (item={'caps': {'mgr': 'allow rw', 'mon': 'allow r', 'osd': 'allow rw pool=cephfs_data'}, 'mode': '0600', 'name': 'client.manila'})  2026-02-15 05:58:22.030878 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:58:22.030889 | orchestrator | 2026-02-15 05:58:22.030900 | orchestrator | TASK [ceph-validate : Validate clients keys caps] ****************************** 2026-02-15 05:58:22.030913 | orchestrator | Sunday 15 February 2026 05:57:53 +0000 (0:00:02.182) 0:04:31.793 ******* 2026-02-15 05:58:22.030924 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:58:22.030985 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:58:22.030998 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:58:22.031009 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:58:22.031020 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:58:22.031046 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:58:22.031058 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:58:22.031070 | orchestrator | 2026-02-15 05:58:22.031083 | orchestrator | TASK [ceph-validate : Check virtual_ips is defined] **************************** 2026-02-15 05:58:22.031096 | orchestrator | Sunday 15 February 2026 05:57:55 +0000 (0:00:02.232) 0:04:34.025 ******* 2026-02-15 05:58:22.031109 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:58:22.031122 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:58:22.031135 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:58:22.031147 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:58:22.031159 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:58:22.031173 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:58:22.031186 | orchestrator | skipping: [testbed-manager] 
2026-02-15 05:58:22.031196 | orchestrator | 2026-02-15 05:58:22.031224 | orchestrator | TASK [ceph-validate : Validate virtual_ips length] ***************************** 2026-02-15 05:58:22.031236 | orchestrator | Sunday 15 February 2026 05:57:58 +0000 (0:00:02.127) 0:04:36.152 ******* 2026-02-15 05:58:22.031247 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:58:22.031258 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:58:22.031269 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:58:22.031279 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:58:22.031290 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:58:22.031301 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:58:22.031311 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:58:22.031322 | orchestrator | 2026-02-15 05:58:22.031332 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-15 05:58:22.031343 | orchestrator | Sunday 15 February 2026 05:58:00 +0000 (0:00:02.400) 0:04:38.553 ******* 2026-02-15 05:58:22.031355 | orchestrator | included: /ansible/roles/ceph-container-engine/tasks/pre_requisites/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager 2026-02-15 05:58:22.031418 | orchestrator | 2026-02-15 05:58:22.031430 | orchestrator | TASK [ceph-container-engine : Include specific variables] ********************** 2026-02-15 05:58:22.031441 | orchestrator | Sunday 15 February 2026 05:58:03 +0000 (0:00:02.733) 0:04:41.287 ******* 2026-02-15 05:58:22.031452 | orchestrator | ok: [testbed-node-0] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-15 05:58:22.031463 | orchestrator | ok: [testbed-node-1] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-15 05:58:22.031474 | orchestrator | ok: [testbed-node-2] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-15 
05:58:22.031485 | orchestrator | ok: [testbed-node-3] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-15 05:58:22.031506 | orchestrator | ok: [testbed-node-4] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-15 05:58:22.031517 | orchestrator | ok: [testbed-node-5] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-15 05:58:22.031528 | orchestrator | ok: [testbed-manager] => (item=/ansible/roles/ceph-container-engine/vars/Debian.yml) 2026-02-15 05:58:22.031538 | orchestrator | 2026-02-15 05:58:22.031549 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override directory] **** 2026-02-15 05:58:22.031560 | orchestrator | Sunday 15 February 2026 05:58:05 +0000 (0:00:02.133) 0:04:43.420 ******* 2026-02-15 05:58:22.031571 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:58:22.031581 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:58:22.031593 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:58:22.031603 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:58:22.031614 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:58:22.031625 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:58:22.031635 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:58:22.031646 | orchestrator | 2026-02-15 05:58:22.031657 | orchestrator | TASK [ceph-container-engine : Create the systemd docker override file] ********* 2026-02-15 05:58:22.031668 | orchestrator | Sunday 15 February 2026 05:58:07 +0000 (0:00:02.135) 0:04:45.555 ******* 2026-02-15 05:58:22.031679 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:58:22.031690 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:58:22.031701 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:58:22.031711 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:58:22.031722 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:58:22.031733 | orchestrator | skipping: [testbed-node-5] 
2026-02-15 05:58:22.031743 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:58:22.031754 | orchestrator | 2026-02-15 05:58:22.031765 | orchestrator | TASK [ceph-container-engine : Remove docker proxy configuration] *************** 2026-02-15 05:58:22.031776 | orchestrator | Sunday 15 February 2026 05:58:09 +0000 (0:00:02.155) 0:04:47.711 ******* 2026-02-15 05:58:22.031787 | orchestrator | ok: [testbed-node-1] 2026-02-15 05:58:22.031799 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:58:22.031810 | orchestrator | ok: [testbed-node-2] 2026-02-15 05:58:22.031820 | orchestrator | ok: [testbed-node-3] 2026-02-15 05:58:22.031831 | orchestrator | ok: [testbed-node-4] 2026-02-15 05:58:22.031842 | orchestrator | ok: [testbed-node-5] 2026-02-15 05:58:22.031853 | orchestrator | ok: [testbed-manager] 2026-02-15 05:58:22.031863 | orchestrator | 2026-02-15 05:58:22.031874 | orchestrator | TASK [ceph-container-engine : Restart docker] ********************************** 2026-02-15 05:58:22.031885 | orchestrator | Sunday 15 February 2026 05:58:12 +0000 (0:00:02.598) 0:04:50.310 ******* 2026-02-15 05:58:22.031896 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:58:22.031907 | orchestrator | skipping: [testbed-node-1] 2026-02-15 05:58:22.031918 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:58:22.031928 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:58:22.031973 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:58:22.031984 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:58:22.031995 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:58:22.032006 | orchestrator | 2026-02-15 05:58:22.032016 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-15 05:58:22.032027 | orchestrator | Sunday 15 February 2026 05:58:14 +0000 (0:00:02.408) 0:04:52.718 ******* 2026-02-15 05:58:22.032038 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:58:22.032049 | orchestrator | 
skipping: [testbed-node-1] 2026-02-15 05:58:22.032060 | orchestrator | skipping: [testbed-node-2] 2026-02-15 05:58:22.032077 | orchestrator | skipping: [testbed-node-3] 2026-02-15 05:58:22.032088 | orchestrator | skipping: [testbed-node-4] 2026-02-15 05:58:22.032099 | orchestrator | skipping: [testbed-node-5] 2026-02-15 05:58:22.032110 | orchestrator | skipping: [testbed-manager] 2026-02-15 05:58:22.032120 | orchestrator | 2026-02-15 05:58:22.032151 | orchestrator | TASK [Get the ceph release being deployed] ************************************* 2026-02-15 05:58:22.032163 | orchestrator | Sunday 15 February 2026 05:58:17 +0000 (0:00:02.452) 0:04:55.171 ******* 2026-02-15 05:58:22.032182 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:58:22.032194 | orchestrator | 2026-02-15 05:58:22.032204 | orchestrator | TASK [Check ceph release being deployed] *************************************** 2026-02-15 05:58:22.032215 | orchestrator | Sunday 15 February 2026 05:58:19 +0000 (0:00:02.722) 0:04:57.893 ******* 2026-02-15 05:58:22.032226 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:58:22.032237 | orchestrator | 2026-02-15 05:58:22.032257 | orchestrator | PLAY [Ensure cluster config is applied] **************************************** 2026-02-15 05:59:01.931043 | orchestrator | 2026-02-15 05:59:01.931161 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 05:59:01.931179 | orchestrator | Sunday 15 February 2026 05:58:22 +0000 (0:00:02.225) 0:05:00.118 ******* 2026-02-15 05:59:01.931191 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.931204 | orchestrator | 2026-02-15 05:59:01.931216 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 05:59:01.931227 | orchestrator | Sunday 15 February 2026 05:58:23 +0000 (0:00:01.480) 0:05:01.599 ******* 2026-02-15 05:59:01.931238 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.931249 | 
orchestrator | 2026-02-15 05:59:01.931259 | orchestrator | TASK [Set cluster configs] ***************************************************** 2026-02-15 05:59:01.931270 | orchestrator | Sunday 15 February 2026 05:58:24 +0000 (0:00:01.213) 0:05:02.812 ******* 2026-02-15 05:59:01.931283 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-15 05:59:01.931297 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-15 05:59:01.931309 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-15 05:59:01.931321 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-15 05:59:01.931334 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 
'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-15 05:59:01.931346 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}])  2026-02-15 05:59:01.931358 | orchestrator | 2026-02-15 05:59:01.931369 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-15 05:59:01.931407 | orchestrator | 2026-02-15 05:59:01.931418 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-15 05:59:01.931429 | orchestrator | Sunday 15 February 2026 05:58:34 +0000 (0:00:10.286) 0:05:13.099 ******* 2026-02-15 05:59:01.931440 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.931451 | orchestrator | 2026-02-15 05:59:01.931462 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-15 05:59:01.931472 | orchestrator | Sunday 15 February 2026 05:58:36 +0000 (0:00:01.478) 0:05:14.578 ******* 2026-02-15 05:59:01.931483 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.931494 | orchestrator | 2026-02-15 05:59:01.931504 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-15 05:59:01.931530 | orchestrator | Sunday 15 February 2026 05:58:37 +0000 (0:00:01.131) 0:05:15.709 ******* 2026-02-15 05:59:01.931542 | orchestrator | skipping: 
[testbed-node-0] 2026-02-15 05:59:01.931556 | orchestrator | 2026-02-15 05:59:01.931570 | orchestrator | TASK [Select a running monitor] ************************************************ 2026-02-15 05:59:01.931583 | orchestrator | Sunday 15 February 2026 05:58:38 +0000 (0:00:01.203) 0:05:16.913 ******* 2026-02-15 05:59:01.931596 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.931608 | orchestrator | 2026-02-15 05:59:01.931620 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 05:59:01.931633 | orchestrator | Sunday 15 February 2026 05:58:39 +0000 (0:00:01.134) 0:05:18.047 ******* 2026-02-15 05:59:01.931646 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-15 05:59:01.931659 | orchestrator | 2026-02-15 05:59:01.931671 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 05:59:01.931702 | orchestrator | Sunday 15 February 2026 05:58:41 +0000 (0:00:01.178) 0:05:19.226 ******* 2026-02-15 05:59:01.931715 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.931728 | orchestrator | 2026-02-15 05:59:01.931741 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 05:59:01.931754 | orchestrator | Sunday 15 February 2026 05:58:42 +0000 (0:00:01.473) 0:05:20.700 ******* 2026-02-15 05:59:01.931767 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.931780 | orchestrator | 2026-02-15 05:59:01.931792 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 05:59:01.931804 | orchestrator | Sunday 15 February 2026 05:58:43 +0000 (0:00:01.178) 0:05:21.879 ******* 2026-02-15 05:59:01.931817 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.931829 | orchestrator | 2026-02-15 05:59:01.931843 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 
2026-02-15 05:59:01.931855 | orchestrator | Sunday 15 February 2026 05:58:45 +0000 (0:00:01.496) 0:05:23.376 ******* 2026-02-15 05:59:01.931867 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.931880 | orchestrator | 2026-02-15 05:59:01.931892 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 05:59:01.931905 | orchestrator | Sunday 15 February 2026 05:58:46 +0000 (0:00:01.138) 0:05:24.514 ******* 2026-02-15 05:59:01.931918 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.931930 | orchestrator | 2026-02-15 05:59:01.931940 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 05:59:01.931951 | orchestrator | Sunday 15 February 2026 05:58:47 +0000 (0:00:01.170) 0:05:25.684 ******* 2026-02-15 05:59:01.931962 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.931990 | orchestrator | 2026-02-15 05:59:01.932002 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 05:59:01.932013 | orchestrator | Sunday 15 February 2026 05:58:48 +0000 (0:00:01.193) 0:05:26.878 ******* 2026-02-15 05:59:01.932024 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:01.932035 | orchestrator | 2026-02-15 05:59:01.932045 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 05:59:01.932056 | orchestrator | Sunday 15 February 2026 05:58:49 +0000 (0:00:01.128) 0:05:28.006 ******* 2026-02-15 05:59:01.932066 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.932088 | orchestrator | 2026-02-15 05:59:01.932099 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 05:59:01.932110 | orchestrator | Sunday 15 February 2026 05:58:51 +0000 (0:00:01.127) 0:05:29.134 ******* 2026-02-15 05:59:01.932121 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 05:59:01.932132 
| orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 05:59:01.932142 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 05:59:01.932153 | orchestrator | 2026-02-15 05:59:01.932164 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 05:59:01.932174 | orchestrator | Sunday 15 February 2026 05:58:52 +0000 (0:00:01.660) 0:05:30.795 ******* 2026-02-15 05:59:01.932185 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:01.932195 | orchestrator | 2026-02-15 05:59:01.932206 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 05:59:01.932216 | orchestrator | Sunday 15 February 2026 05:58:53 +0000 (0:00:01.259) 0:05:32.054 ******* 2026-02-15 05:59:01.932227 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 05:59:01.932237 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 05:59:01.932248 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 05:59:01.932259 | orchestrator | 2026-02-15 05:59:01.932269 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 05:59:01.932280 | orchestrator | Sunday 15 February 2026 05:58:57 +0000 (0:00:03.250) 0:05:35.305 ******* 2026-02-15 05:59:01.932290 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-15 05:59:01.932301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-15 05:59:01.932312 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-15 05:59:01.932322 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:01.932333 | orchestrator | 2026-02-15 05:59:01.932344 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] 
********************* 2026-02-15 05:59:01.932354 | orchestrator | Sunday 15 February 2026 05:58:58 +0000 (0:00:01.484) 0:05:36.789 ******* 2026-02-15 05:59:01.932367 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 05:59:01.932385 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 05:59:01.932397 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 05:59:01.932408 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:01.932419 | orchestrator | 2026-02-15 05:59:01.932429 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 05:59:01.932440 | orchestrator | Sunday 15 February 2026 05:59:00 +0000 (0:00:02.018) 0:05:38.807 ******* 2026-02-15 05:59:01.932459 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:22.163935 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:22.164140 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:22.164159 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:22.164172 | orchestrator | 2026-02-15 05:59:22.164183 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 05:59:22.164195 | orchestrator | Sunday 15 February 2026 05:59:01 +0000 (0:00:01.214) 0:05:40.022 ******* 2026-02-15 05:59:22.164207 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'e40f30e87190', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 05:58:54.526536', 'end': '2026-02-15 05:58:54.577652', 'delta': '0:00:00.051116', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e40f30e87190'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 05:59:22.164220 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '3aeb4857506c', 'stderr': '', 'rc': 0, 'cmd': 
['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 05:58:55.138894', 'end': '2026-02-15 05:58:55.189329', 'delta': '0:00:00.050435', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3aeb4857506c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 05:59:22.164245 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '9cffadff9441', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 05:58:55.976158', 'end': '2026-02-15 05:58:56.030485', 'delta': '0:00:00.054327', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9cffadff9441'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 05:59:22.164256 | orchestrator | 2026-02-15 05:59:22.164266 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 05:59:22.164276 | orchestrator | Sunday 15 February 2026 05:59:03 +0000 (0:00:01.301) 0:05:41.324 ******* 2026-02-15 05:59:22.164285 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:22.164296 | orchestrator | 2026-02-15 05:59:22.164306 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 05:59:22.164315 | orchestrator | 
Sunday 15 February 2026 05:59:04 +0000 (0:00:01.621) 0:05:42.945 ******* 2026-02-15 05:59:22.164325 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:22.164334 | orchestrator | 2026-02-15 05:59:22.164344 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-15 05:59:22.164373 | orchestrator | Sunday 15 February 2026 05:59:06 +0000 (0:00:01.248) 0:05:44.194 ******* 2026-02-15 05:59:22.164383 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:22.164392 | orchestrator | 2026-02-15 05:59:22.164402 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-15 05:59:22.164412 | orchestrator | Sunday 15 February 2026 05:59:07 +0000 (0:00:01.129) 0:05:45.323 ******* 2026-02-15 05:59:22.164437 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-15 05:59:22.164450 | orchestrator | 2026-02-15 05:59:22.164462 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 05:59:22.164473 | orchestrator | Sunday 15 February 2026 05:59:09 +0000 (0:00:02.074) 0:05:47.398 ******* 2026-02-15 05:59:22.164483 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:22.164494 | orchestrator | 2026-02-15 05:59:22.164506 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-15 05:59:22.164516 | orchestrator | Sunday 15 February 2026 05:59:10 +0000 (0:00:01.146) 0:05:48.544 ******* 2026-02-15 05:59:22.164527 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:22.164538 | orchestrator | 2026-02-15 05:59:22.164550 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-15 05:59:22.164561 | orchestrator | Sunday 15 February 2026 05:59:11 +0000 (0:00:01.130) 0:05:49.675 ******* 2026-02-15 05:59:22.164572 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:22.164583 | orchestrator | 2026-02-15 
05:59:22.164596 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 05:59:22.164612 | orchestrator | Sunday 15 February 2026 05:59:12 +0000 (0:00:01.251) 0:05:50.926 ******* 2026-02-15 05:59:22.164629 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:22.164647 | orchestrator | 2026-02-15 05:59:22.164663 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-15 05:59:22.164680 | orchestrator | Sunday 15 February 2026 05:59:13 +0000 (0:00:01.151) 0:05:52.077 ******* 2026-02-15 05:59:22.164698 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:22.164712 | orchestrator | 2026-02-15 05:59:22.164724 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-15 05:59:22.164735 | orchestrator | Sunday 15 February 2026 05:59:15 +0000 (0:00:01.126) 0:05:53.204 ******* 2026-02-15 05:59:22.164746 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:22.164758 | orchestrator | 2026-02-15 05:59:22.164769 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-15 05:59:22.164780 | orchestrator | Sunday 15 February 2026 05:59:16 +0000 (0:00:01.120) 0:05:54.325 ******* 2026-02-15 05:59:22.164791 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:22.164803 | orchestrator | 2026-02-15 05:59:22.164814 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-15 05:59:22.164825 | orchestrator | Sunday 15 February 2026 05:59:17 +0000 (0:00:01.213) 0:05:55.539 ******* 2026-02-15 05:59:22.164835 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:22.164844 | orchestrator | 2026-02-15 05:59:22.164853 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-15 05:59:22.164863 | orchestrator | Sunday 15 February 2026 05:59:18 +0000 (0:00:01.129) 
0:05:56.668 ******* 2026-02-15 05:59:22.164872 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:22.164882 | orchestrator | 2026-02-15 05:59:22.164892 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-15 05:59:22.164902 | orchestrator | Sunday 15 February 2026 05:59:19 +0000 (0:00:01.131) 0:05:57.800 ******* 2026-02-15 05:59:22.164911 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:22.164921 | orchestrator | 2026-02-15 05:59:22.164930 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-15 05:59:22.164940 | orchestrator | Sunday 15 February 2026 05:59:20 +0000 (0:00:01.194) 0:05:58.994 ******* 2026-02-15 05:59:22.164950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:59:22.164968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:59:22.164983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:59:22.165034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 05:59:22.165068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:59:23.408281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:59:23.408386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 
'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:59:23.408429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37951a5f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': 
'79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 05:59:23.408470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:59:23.408483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 05:59:23.408495 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:23.408508 | orchestrator | 2026-02-15 05:59:23.408521 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 05:59:23.408533 | orchestrator | Sunday 15 February 2026 05:59:22 +0000 (0:00:01.254) 0:06:00.249 ******* 2026-02-15 05:59:23.408566 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 
'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:23.408580 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:23.408592 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:23.408612 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 
'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:23.408630 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:23.408641 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:23.408662 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 
None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:47.998787 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37951a5f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': 
'5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:47.998994 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:47.999115 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 05:59:47.999131 | 
orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:47.999145 | orchestrator | 2026-02-15 05:59:47.999157 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-15 05:59:47.999169 | orchestrator | Sunday 15 February 2026 05:59:23 +0000 (0:00:01.256) 0:06:01.505 ******* 2026-02-15 05:59:47.999180 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:47.999192 | orchestrator | 2026-02-15 05:59:47.999203 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-15 05:59:47.999213 | orchestrator | Sunday 15 February 2026 05:59:24 +0000 (0:00:01.590) 0:06:03.096 ******* 2026-02-15 05:59:47.999224 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:47.999235 | orchestrator | 2026-02-15 05:59:47.999245 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 05:59:47.999277 | orchestrator | Sunday 15 February 2026 05:59:26 +0000 (0:00:01.139) 0:06:04.236 ******* 2026-02-15 05:59:47.999288 | orchestrator | ok: [testbed-node-0] 2026-02-15 05:59:47.999299 | orchestrator | 2026-02-15 05:59:47.999309 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 05:59:47.999325 | orchestrator | Sunday 15 February 2026 05:59:27 +0000 (0:00:01.490) 0:06:05.726 ******* 2026-02-15 05:59:47.999345 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:47.999363 | orchestrator | 2026-02-15 05:59:47.999382 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 05:59:47.999416 | orchestrator | Sunday 15 February 2026 05:59:28 +0000 (0:00:01.210) 0:06:06.937 ******* 2026-02-15 05:59:47.999434 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:47.999452 | orchestrator | 2026-02-15 05:59:47.999468 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 
05:59:47.999479 | orchestrator | Sunday 15 February 2026 05:59:30 +0000 (0:00:01.327) 0:06:08.264 ******* 2026-02-15 05:59:47.999490 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:47.999500 | orchestrator | 2026-02-15 05:59:47.999511 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 05:59:47.999521 | orchestrator | Sunday 15 February 2026 05:59:31 +0000 (0:00:01.190) 0:06:09.455 ******* 2026-02-15 05:59:47.999532 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 05:59:47.999543 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-15 05:59:47.999553 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-15 05:59:47.999564 | orchestrator | 2026-02-15 05:59:47.999575 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 05:59:47.999585 | orchestrator | Sunday 15 February 2026 05:59:33 +0000 (0:00:02.219) 0:06:11.675 ******* 2026-02-15 05:59:47.999596 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-15 05:59:47.999607 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-15 05:59:47.999617 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-15 05:59:47.999628 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:47.999639 | orchestrator | 2026-02-15 05:59:47.999649 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-15 05:59:47.999660 | orchestrator | Sunday 15 February 2026 05:59:34 +0000 (0:00:01.150) 0:06:12.825 ******* 2026-02-15 05:59:47.999671 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:47.999682 | orchestrator | 2026-02-15 05:59:47.999692 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-15 05:59:47.999703 | orchestrator | Sunday 15 February 2026 05:59:35 +0000 
(0:00:01.175) 0:06:14.001 ******* 2026-02-15 05:59:47.999713 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 05:59:47.999724 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 05:59:47.999736 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 05:59:47.999746 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 05:59:47.999757 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 05:59:47.999768 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 05:59:47.999778 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 05:59:47.999789 | orchestrator | 2026-02-15 05:59:47.999799 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-15 05:59:47.999817 | orchestrator | Sunday 15 February 2026 05:59:38 +0000 (0:00:02.174) 0:06:16.175 ******* 2026-02-15 05:59:47.999828 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 05:59:47.999839 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 05:59:47.999850 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 05:59:47.999860 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 05:59:47.999871 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 05:59:47.999881 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 05:59:47.999892 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 
05:59:47.999903 | orchestrator | 2026-02-15 05:59:47.999920 | orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-15 05:59:47.999931 | orchestrator | Sunday 15 February 2026 05:59:41 +0000 (0:00:02.986) 0:06:19.162 ******* 2026-02-15 05:59:47.999941 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-15 05:59:47.999952 | orchestrator | 2026-02-15 05:59:47.999962 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-15 05:59:47.999973 | orchestrator | Sunday 15 February 2026 05:59:43 +0000 (0:00:02.314) 0:06:21.476 ******* 2026-02-15 05:59:47.999983 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:47.999994 | orchestrator | 2026-02-15 05:59:48.000005 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-15 05:59:48.000016 | orchestrator | Sunday 15 February 2026 05:59:44 +0000 (0:00:01.196) 0:06:22.673 ******* 2026-02-15 05:59:48.000062 | orchestrator | skipping: [testbed-node-0] 2026-02-15 05:59:48.000077 | orchestrator | 2026-02-15 05:59:48.000088 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-15 05:59:48.000099 | orchestrator | Sunday 15 February 2026 05:59:45 +0000 (0:00:01.151) 0:06:23.825 ******* 2026-02-15 05:59:48.000109 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2026-02-15 05:59:48.000120 | orchestrator | 2026-02-15 05:59:48.000131 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-15 05:59:48.000150 | orchestrator | Sunday 15 February 2026 05:59:47 +0000 (0:00:02.259) 0:06:26.084 ******* 2026-02-15 06:00:49.217063 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:00:49.217214 | orchestrator | 2026-02-15 06:00:49.217231 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 
2026-02-15 06:00:49.217244 | orchestrator | Sunday 15 February 2026 05:59:49 +0000 (0:00:01.154) 0:06:27.238 *******
2026-02-15 06:00:49.217256 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 06:00:49.217267 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:00:49.217279 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:00:49.217290 | orchestrator |
2026-02-15 06:00:49.217301 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ********************
2026-02-15 06:00:49.217312 | orchestrator | Sunday 15 February 2026 05:59:51 +0000 (0:00:02.475) 0:06:29.714 *******
2026-02-15 06:00:49.217323 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd', 'testbed-node-0'])
2026-02-15 06:00:49.217333 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd', 'testbed-node-1'])
2026-02-15 06:00:49.217345 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd', 'testbed-node-2'])
2026-02-15 06:00:49.217356 | orchestrator | ok: [testbed-node-0] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])
2026-02-15 06:00:49.217367 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])
2026-02-15 06:00:49.217378 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])
2026-02-15 06:00:49.217389 | orchestrator |
2026-02-15 06:00:49.217400 | orchestrator | TASK [Stop ceph mon] ***********************************************************
2026-02-15 06:00:49.217411 | orchestrator | Sunday 15 February 2026 06:00:04 +0000 (0:00:13.191) 0:06:42.905 *******
2026-02-15 06:00:49.217423 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 06:00:49.217435 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 06:00:49.217446 | orchestrator |
2026-02-15 06:00:49.217457 | orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-15 06:00:49.217468 | orchestrator | Sunday 15 February 2026 06:00:08 +0000 (0:00:03.665) 0:06:46.570 *******
2026-02-15 06:00:49.217479 | orchestrator | changed: [testbed-node-0]
2026-02-15 06:00:49.217490 | orchestrator |
2026-02-15 06:00:49.217501 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 06:00:49.217537 | orchestrator | Sunday 15 February 2026 06:00:10 +0000 (0:00:02.418) 0:06:48.989 *******
2026-02-15 06:00:49.217548 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0
2026-02-15 06:00:49.217560 | orchestrator |
2026-02-15 06:00:49.217570 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 06:00:49.217581 | orchestrator | Sunday 15 February 2026 06:00:12 +0000 (0:00:01.424) 0:06:50.413 *******
2026-02-15 06:00:49.217595 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-15 06:00:49.217607 | orchestrator |
2026-02-15 06:00:49.217619 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 06:00:49.217631 | orchestrator | Sunday 15 February 2026 06:00:13 +0000 (0:00:01.471) 0:06:51.885 *******
2026-02-15 06:00:49.217659 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:00:49.217672 | orchestrator |
2026-02-15 06:00:49.217685 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 06:00:49.217698 | orchestrator | Sunday 15 February 2026 06:00:15 +0000 (0:00:01.571) 0:06:53.457 *******
2026-02-15 06:00:49.217710 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.217723 | orchestrator |
2026-02-15 06:00:49.217736 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 06:00:49.217750 | orchestrator | Sunday 15 February 2026 06:00:16 +0000 (0:00:01.138) 0:06:54.596 *******
2026-02-15 06:00:49.217762 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.217774 | orchestrator |
2026-02-15 06:00:49.217786 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 06:00:49.217799 | orchestrator | Sunday 15 February 2026 06:00:17 +0000 (0:00:01.206) 0:06:55.802 *******
2026-02-15 06:00:49.217811 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.217823 | orchestrator |
2026-02-15 06:00:49.217836 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 06:00:49.217848 | orchestrator | Sunday 15 February 2026 06:00:18 +0000 (0:00:01.174) 0:06:56.977 *******
2026-02-15 06:00:49.217860 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:00:49.217872 | orchestrator |
2026-02-15 06:00:49.217885 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 06:00:49.217897 | orchestrator | Sunday 15 February 2026 06:00:20 +0000 (0:00:01.579) 0:06:58.556 *******
2026-02-15 06:00:49.217909 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.217922 | orchestrator |
2026-02-15 06:00:49.217935 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 06:00:49.217947 | orchestrator | Sunday 15 February 2026 06:00:21 +0000 (0:00:01.131) 0:06:59.687 *******
2026-02-15 06:00:49.217958 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.217969 | orchestrator |
2026-02-15 06:00:49.217980 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 06:00:49.217990 | orchestrator | Sunday 15 February 2026 06:00:22 +0000 (0:00:01.122) 0:07:00.810 *******
2026-02-15 06:00:49.218001 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:00:49.218012 | orchestrator |
2026-02-15 06:00:49.218077 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 06:00:49.218107 | orchestrator | Sunday 15 February 2026 06:00:24 +0000 (0:00:01.596) 0:07:02.407 *******
2026-02-15 06:00:49.218118 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:00:49.218129 | orchestrator |
2026-02-15 06:00:49.218158 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 06:00:49.218170 | orchestrator | Sunday 15 February 2026 06:00:25 +0000 (0:00:01.623) 0:07:04.030 *******
2026-02-15 06:00:49.218181 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218192 | orchestrator |
2026-02-15 06:00:49.218217 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 06:00:49.218228 | orchestrator | Sunday 15 February 2026 06:00:27 +0000 (0:00:01.157) 0:07:05.188 *******
2026-02-15 06:00:49.218239 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:00:49.218271 | orchestrator |
2026-02-15 06:00:49.218282 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 06:00:49.218293 | orchestrator | Sunday 15 February 2026 06:00:28 +0000 (0:00:01.311) 0:07:06.500 *******
2026-02-15 06:00:49.218304 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218315 | orchestrator |
2026-02-15 06:00:49.218325 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 06:00:49.218336 | orchestrator | Sunday 15 February 2026 06:00:29 +0000 (0:00:01.146) 0:07:07.646 *******
2026-02-15 06:00:49.218347 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218358 | orchestrator |
2026-02-15 06:00:49.218368 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 06:00:49.218379 | orchestrator | Sunday 15 February 2026 06:00:30 +0000 (0:00:01.142) 0:07:08.789 *******
2026-02-15 06:00:49.218390 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218400 | orchestrator |
2026-02-15 06:00:49.218411 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 06:00:49.218422 | orchestrator | Sunday 15 February 2026 06:00:31 +0000 (0:00:01.162) 0:07:09.952 *******
2026-02-15 06:00:49.218432 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218443 | orchestrator |
2026-02-15 06:00:49.218454 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 06:00:49.218464 | orchestrator | Sunday 15 February 2026 06:00:32 +0000 (0:00:01.119) 0:07:11.072 *******
2026-02-15 06:00:49.218475 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218486 | orchestrator |
2026-02-15 06:00:49.218496 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 06:00:49.218507 | orchestrator | Sunday 15 February 2026 06:00:34 +0000 (0:00:01.170) 0:07:12.242 *******
2026-02-15 06:00:49.218518 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:00:49.218528 | orchestrator |
2026-02-15 06:00:49.218539 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 06:00:49.218550 | orchestrator | Sunday 15 February 2026 06:00:35 +0000 (0:00:01.185) 0:07:13.428 *******
2026-02-15 06:00:49.218560 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:00:49.218571 | orchestrator |
2026-02-15 06:00:49.218582 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 06:00:49.218593 | orchestrator | Sunday 15 February 2026 06:00:36 +0000 (0:00:01.149) 0:07:14.578 *******
2026-02-15 06:00:49.218604 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:00:49.218614 | orchestrator |
2026-02-15 06:00:49.218625 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-15 06:00:49.218636 | orchestrator | Sunday 15 February 2026 06:00:37 +0000 (0:00:01.189) 0:07:15.767 *******
2026-02-15 06:00:49.218646 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218657 | orchestrator |
2026-02-15 06:00:49.218668 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-15 06:00:49.218678 | orchestrator | Sunday 15 February 2026 06:00:38 +0000 (0:00:01.196) 0:07:16.964 *******
2026-02-15 06:00:49.218689 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218700 | orchestrator |
2026-02-15 06:00:49.218711 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-15 06:00:49.218727 | orchestrator | Sunday 15 February 2026 06:00:39 +0000 (0:00:01.125) 0:07:18.090 *******
2026-02-15 06:00:49.218738 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218749 | orchestrator |
2026-02-15 06:00:49.218760 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-15 06:00:49.218771 | orchestrator | Sunday 15 February 2026 06:00:41 +0000 (0:00:01.141) 0:07:19.232 *******
2026-02-15 06:00:49.218782 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218792 | orchestrator |
2026-02-15 06:00:49.218803 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-15 06:00:49.218814 | orchestrator | Sunday 15 February 2026 06:00:42 +0000 (0:00:01.114) 0:07:20.346 *******
2026-02-15 06:00:49.218824 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218842 | orchestrator |
2026-02-15 06:00:49.218853 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-15 06:00:49.218864 | orchestrator | Sunday 15 February 2026 06:00:43 +0000 (0:00:01.163) 0:07:21.510 *******
2026-02-15 06:00:49.218875 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218886 | orchestrator |
2026-02-15 06:00:49.218897 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-15 06:00:49.218908 | orchestrator | Sunday 15 February 2026 06:00:44 +0000 (0:00:01.178) 0:07:22.688 *******
2026-02-15 06:00:49.218918 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218929 | orchestrator |
2026-02-15 06:00:49.218940 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-15 06:00:49.218951 | orchestrator | Sunday 15 February 2026 06:00:45 +0000 (0:00:01.176) 0:07:23.865 *******
2026-02-15 06:00:49.218962 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.218973 | orchestrator |
2026-02-15 06:00:49.218984 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-15 06:00:49.218995 | orchestrator | Sunday 15 February 2026 06:00:46 +0000 (0:00:01.136) 0:07:25.001 *******
2026-02-15 06:00:49.219006 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.219017 | orchestrator |
2026-02-15 06:00:49.219028 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-15 06:00:49.219039 | orchestrator | Sunday 15 February 2026 06:00:48 +0000 (0:00:01.159) 0:07:26.160 *******
2026-02-15 06:00:49.219049 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:00:49.219060 | orchestrator |
2026-02-15 06:00:49.219071 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-15 06:00:49.219122 | orchestrator | Sunday 15 February 2026 06:00:49 +0000 (0:00:01.145) 0:07:27.306 *******
2026-02-15 06:01:41.349756 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.349873 | orchestrator |
2026-02-15 06:01:41.349890 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-15 06:01:41.349904 | orchestrator | Sunday 15 February 2026 06:00:50 +0000 (0:00:01.149) 0:07:28.455 *******
2026-02-15 06:01:41.349916 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.349928 | orchestrator |
2026-02-15 06:01:41.349939 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-15 06:01:41.349950 | orchestrator | Sunday 15 February 2026 06:00:51 +0000 (0:00:01.147) 0:07:29.602 *******
2026-02-15 06:01:41.349961 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:01:41.349973 | orchestrator |
2026-02-15 06:01:41.349984 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-15 06:01:41.349995 | orchestrator | Sunday 15 February 2026 06:00:53 +0000 (0:00:02.030) 0:07:31.633 *******
2026-02-15 06:01:41.350006 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:01:41.350070 | orchestrator |
2026-02-15 06:01:41.350084 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-15 06:01:41.350095 | orchestrator | Sunday 15 February 2026 06:00:55 +0000 (0:00:02.426) 0:07:34.060 *******
2026-02-15 06:01:41.350106 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-15 06:01:41.350118 | orchestrator |
2026-02-15 06:01:41.350129 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-15 06:01:41.350205 | orchestrator | Sunday 15 February 2026 06:00:57 +0000 (0:00:01.521) 0:07:35.581 *******
2026-02-15 06:01:41.350224 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.350241 | orchestrator |
2026-02-15 06:01:41.350252 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-15 06:01:41.350263 | orchestrator | Sunday 15 February 2026 06:00:58 +0000 (0:00:01.173) 0:07:36.755 *******
2026-02-15 06:01:41.350274 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.350285 | orchestrator |
2026-02-15 06:01:41.350299 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-15 06:01:41.350312 | orchestrator | Sunday 15 February 2026 06:00:59 +0000 (0:00:01.168) 0:07:37.923 *******
2026-02-15 06:01:41.350366 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-15 06:01:41.350385 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-15 06:01:41.350405 | orchestrator |
2026-02-15 06:01:41.350423 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-15 06:01:41.350441 | orchestrator | Sunday 15 February 2026 06:01:01 +0000 (0:00:01.843) 0:07:39.767 *******
2026-02-15 06:01:41.350456 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:01:41.350474 | orchestrator |
2026-02-15 06:01:41.350494 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-15 06:01:41.350512 | orchestrator | Sunday 15 February 2026 06:01:03 +0000 (0:00:01.663) 0:07:41.431 *******
2026-02-15 06:01:41.350530 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.350551 | orchestrator |
2026-02-15 06:01:41.350565 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-15 06:01:41.350578 | orchestrator | Sunday 15 February 2026 06:01:04 +0000 (0:00:01.148) 0:07:42.579 *******
2026-02-15 06:01:41.350590 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.350603 | orchestrator |
2026-02-15 06:01:41.350616 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-15 06:01:41.350629 | orchestrator | Sunday 15 February 2026 06:01:05 +0000 (0:00:01.162) 0:07:43.742 *******
2026-02-15 06:01:41.350656 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.350668 | orchestrator |
2026-02-15 06:01:41.350679 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-15 06:01:41.350690 | orchestrator | Sunday 15 February 2026 06:01:06 +0000 (0:00:01.117) 0:07:44.859 *******
2026-02-15 06:01:41.350701 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-15 06:01:41.350712 | orchestrator |
2026-02-15 06:01:41.350723 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-15 06:01:41.350733 | orchestrator | Sunday 15 February 2026 06:01:08 +0000 (0:00:01.525) 0:07:46.384 *******
2026-02-15 06:01:41.350744 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:01:41.350755 | orchestrator |
2026-02-15 06:01:41.350766 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-15 06:01:41.350777 | orchestrator | Sunday 15 February 2026 06:01:10 +0000 (0:00:01.753) 0:07:48.137 *******
2026-02-15 06:01:41.350788 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-15 06:01:41.350799 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-15 06:01:41.350809 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-15 06:01:41.350820 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.350831 | orchestrator |
2026-02-15 06:01:41.350841 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-15 06:01:41.350852 | orchestrator | Sunday 15 February 2026 06:01:11 +0000 (0:00:01.199) 0:07:49.337 *******
2026-02-15 06:01:41.350863 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.350873 | orchestrator |
2026-02-15 06:01:41.350884 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-15 06:01:41.350895 | orchestrator | Sunday 15 February 2026 06:01:12 +0000 (0:00:01.138) 0:07:50.475 *******
2026-02-15 06:01:41.350906 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.350916 | orchestrator |
2026-02-15 06:01:41.350927 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-15 06:01:41.350938 | orchestrator | Sunday 15 February 2026 06:01:13 +0000 (0:00:01.193) 0:07:51.668 *******
2026-02-15 06:01:41.350948 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.350959 | orchestrator |
2026-02-15 06:01:41.350970 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-15 06:01:41.351001 | orchestrator | Sunday 15 February 2026 06:01:14 +0000 (0:00:01.176) 0:07:52.846 *******
2026-02-15 06:01:41.351026 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.351037 | orchestrator |
2026-02-15 06:01:41.351048 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-15 06:01:41.351058 | orchestrator | Sunday 15 February 2026 06:01:15 +0000 (0:00:01.193) 0:07:54.040 *******
2026-02-15 06:01:41.351069 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.351080 | orchestrator |
2026-02-15 06:01:41.351091 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-15 06:01:41.351102 | orchestrator | Sunday 15 February 2026 06:01:17 +0000 (0:00:01.144) 0:07:55.184 *******
2026-02-15 06:01:41.351113 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:01:41.351124 | orchestrator |
2026-02-15 06:01:41.351182 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-15 06:01:41.351196 | orchestrator | Sunday 15 February 2026 06:01:19 +0000 (0:00:02.559) 0:07:57.744 *******
2026-02-15 06:01:41.351207 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:01:41.351218 | orchestrator |
2026-02-15 06:01:41.351229 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-15 06:01:41.351239 | orchestrator | Sunday 15 February 2026 06:01:20 +0000 (0:00:01.169) 0:07:58.913 *******
2026-02-15 06:01:41.351250 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-15 06:01:41.351261 | orchestrator |
2026-02-15 06:01:41.351272 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-15 06:01:41.351295 | orchestrator | Sunday 15 February 2026 06:01:22 +0000 (0:00:01.482) 0:08:00.396 *******
2026-02-15 06:01:41.351307 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.351329 | orchestrator |
2026-02-15 06:01:41.351340 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-15 06:01:41.351351 | orchestrator | Sunday 15 February 2026 06:01:23 +0000 (0:00:01.216) 0:08:01.613 *******
2026-02-15 06:01:41.351362 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.351373 | orchestrator |
2026-02-15 06:01:41.351383 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-15 06:01:41.351394 | orchestrator | Sunday 15 February 2026 06:01:24 +0000 (0:00:01.192) 0:08:02.806 *******
2026-02-15 06:01:41.351405 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.351416 | orchestrator |
2026-02-15 06:01:41.351426 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-15 06:01:41.351437 | orchestrator | Sunday 15 February 2026 06:01:25 +0000 (0:00:01.179) 0:08:03.985 *******
2026-02-15 06:01:41.351448 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.351459 | orchestrator |
2026-02-15 06:01:41.351469 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-15 06:01:41.351480 | orchestrator | Sunday 15 February 2026 06:01:27 +0000 (0:00:01.123) 0:08:05.109 *******
2026-02-15 06:01:41.351491 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.351502 | orchestrator |
2026-02-15 06:01:41.351513 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-15 06:01:41.351524 | orchestrator | Sunday 15 February 2026 06:01:28 +0000 (0:00:01.145) 0:08:06.255 *******
2026-02-15 06:01:41.351535 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.351546 | orchestrator |
2026-02-15 06:01:41.351557 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-15 06:01:41.351568 | orchestrator | Sunday 15 February 2026 06:01:29 +0000 (0:00:01.127) 0:08:07.382 *******
2026-02-15 06:01:41.351578 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.351589 | orchestrator |
2026-02-15 06:01:41.351600 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-15 06:01:41.351618 | orchestrator | Sunday 15 February 2026 06:01:30 +0000 (0:00:01.220) 0:08:08.603 *******
2026-02-15 06:01:41.351629 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:01:41.351642 | orchestrator |
2026-02-15 06:01:41.351661 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-15 06:01:41.351679 | orchestrator | Sunday 15 February 2026 06:01:31 +0000 (0:00:01.162) 0:08:09.765 *******
2026-02-15 06:01:41.351707 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:01:41.351724 | orchestrator |
2026-02-15 06:01:41.351738 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-15 06:01:41.351756 | orchestrator | Sunday 15 February 2026 06:01:32 +0000 (0:00:01.161) 0:08:10.927 *******
2026-02-15 06:01:41.351774 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-15 06:01:41.351792 | orchestrator |
2026-02-15 06:01:41.351811 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-15 06:01:41.351822 | orchestrator | Sunday 15 February 2026 06:01:34 +0000 (0:00:01.571) 0:08:12.498 *******
2026-02-15 06:01:41.351833 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-15 06:01:41.351844 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-15 06:01:41.351855 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-15 06:01:41.351865 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-15 06:01:41.351876 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-15 06:01:41.351887 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-15 06:01:41.351897 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-15 06:01:41.351908 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-15 06:01:41.351919 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-15 06:01:41.351930 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-15 06:01:41.351941 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 06:01:41.351951 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 06:01:41.351962 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 06:01:41.351973 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 06:01:41.351993 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-15 06:02:29.667316 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-15 06:02:29.667399 | orchestrator |
2026-02-15 06:02:29.667407 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-15 06:02:29.667413 | orchestrator | Sunday 15 February 2026 06:01:41 +0000 (0:00:06.931) 0:08:19.430 *******
2026-02-15 06:02:29.667418 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667423 | orchestrator |
2026-02-15 06:02:29.667427 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-15 06:02:29.667432 | orchestrator | Sunday 15 February 2026 06:01:42 +0000 (0:00:01.147) 0:08:20.578 *******
2026-02-15 06:02:29.667436 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667440 | orchestrator |
2026-02-15 06:02:29.667444 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 06:02:29.667449 | orchestrator | Sunday 15 February 2026 06:01:43 +0000 (0:00:01.197) 0:08:21.775 *******
2026-02-15 06:02:29.667453 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667457 | orchestrator |
2026-02-15 06:02:29.667461 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 06:02:29.667465 | orchestrator | Sunday 15 February 2026 06:01:44 +0000 (0:00:01.132) 0:08:22.908 *******
2026-02-15 06:02:29.667469 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667473 | orchestrator |
2026-02-15 06:02:29.667477 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 06:02:29.667481 | orchestrator | Sunday 15 February 2026 06:01:45 +0000 (0:00:01.112) 0:08:24.021 *******
2026-02-15 06:02:29.667486 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667490 | orchestrator |
2026-02-15 06:02:29.667494 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 06:02:29.667498 | orchestrator | Sunday 15 February 2026 06:01:47 +0000 (0:00:01.149) 0:08:25.171 *******
2026-02-15 06:02:29.667502 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667524 | orchestrator |
2026-02-15 06:02:29.667528 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-15 06:02:29.667533 | orchestrator | Sunday 15 February 2026 06:01:48 +0000 (0:00:01.219) 0:08:26.390 *******
2026-02-15 06:02:29.667537 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667541 | orchestrator |
2026-02-15 06:02:29.667545 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-15 06:02:29.667549 | orchestrator | Sunday 15 February 2026 06:01:49 +0000 (0:00:01.125) 0:08:27.515 *******
2026-02-15 06:02:29.667553 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667557 | orchestrator |
2026-02-15 06:02:29.667561 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-15 06:02:29.667565 | orchestrator | Sunday 15 February 2026 06:01:50 +0000 (0:00:01.213) 0:08:28.730 *******
2026-02-15 06:02:29.667569 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667573 | orchestrator |
2026-02-15 06:02:29.667577 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-15 06:02:29.667581 | orchestrator | Sunday 15 February 2026 06:01:51 +0000 (0:00:01.127) 0:08:29.857 *******
2026-02-15 06:02:29.667585 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667589 | orchestrator |
2026-02-15 06:02:29.667593 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-15 06:02:29.667597 | orchestrator | Sunday 15 February 2026 06:01:52 +0000 (0:00:01.122) 0:08:30.980 *******
2026-02-15 06:02:29.667601 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667605 | orchestrator |
2026-02-15 06:02:29.667619 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-15 06:02:29.667623 | orchestrator | Sunday 15 February 2026 06:01:54 +0000 (0:00:01.175) 0:08:32.155 *******
2026-02-15 06:02:29.667627 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667631 | orchestrator |
2026-02-15 06:02:29.667635 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-15 06:02:29.667639 | orchestrator | Sunday 15 February 2026 06:01:55 +0000 (0:00:01.173) 0:08:33.329 *******
2026-02-15 06:02:29.667644 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667648 | orchestrator |
2026-02-15 06:02:29.667652 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-15 06:02:29.667656 | orchestrator | Sunday 15 February 2026 06:01:56 +0000 (0:00:01.212) 0:08:34.541 *******
2026-02-15 06:02:29.667660 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667664 | orchestrator |
2026-02-15 06:02:29.667668 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-15 06:02:29.667672 | orchestrator | Sunday 15 February 2026 06:01:57 +0000 (0:00:01.150) 0:08:35.692 *******
2026-02-15 06:02:29.667676 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667680 | orchestrator |
2026-02-15 06:02:29.667684 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-15 06:02:29.667688 | orchestrator | Sunday 15 February 2026 06:01:58 +0000 (0:00:01.260) 0:08:36.952 *******
2026-02-15 06:02:29.667692 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667696 | orchestrator |
2026-02-15 06:02:29.667700 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-15 06:02:29.667704 | orchestrator | Sunday 15 February 2026 06:01:59 +0000 (0:00:01.124) 0:08:38.077 *******
2026-02-15 06:02:29.667708 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667712 | orchestrator |
2026-02-15 06:02:29.667717 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:02:29.667722 | orchestrator | Sunday 15 February 2026 06:02:01 +0000 (0:00:01.138) 0:08:39.215 *******
2026-02-15 06:02:29.667726 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667730 | orchestrator |
2026-02-15 06:02:29.667734 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:02:29.667743 | orchestrator | Sunday 15 February 2026 06:02:02 +0000 (0:00:01.174) 0:08:40.390 *******
2026-02-15 06:02:29.667747 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667751 | orchestrator |
2026-02-15 06:02:29.667765 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:02:29.667769 | orchestrator | Sunday 15 February 2026 06:02:03 +0000 (0:00:01.193) 0:08:41.583 *******
2026-02-15 06:02:29.667774 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667778 | orchestrator |
2026-02-15 06:02:29.667782 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:02:29.667786 | orchestrator | Sunday 15 February 2026 06:02:04 +0000 (0:00:01.159) 0:08:42.742 *******
2026-02-15 06:02:29.667791 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667795 | orchestrator |
2026-02-15 06:02:29.667799 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:02:29.667803 | orchestrator | Sunday 15 February 2026 06:02:05 +0000 (0:00:01.186) 0:08:43.929 *******
2026-02-15 06:02:29.667808 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-15 06:02:29.667812 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-15 06:02:29.667817 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-15 06:02:29.667821 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667825 | orchestrator |
2026-02-15 06:02:29.667829 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:02:29.667834 | orchestrator | Sunday 15 February 2026 06:02:07 +0000 (0:00:01.732) 0:08:45.662 *******
2026-02-15 06:02:29.667838 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-15 06:02:29.667842 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-15 06:02:29.667847 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-15 06:02:29.667851 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667855 | orchestrator |
2026-02-15 06:02:29.667859 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:02:29.667864 | orchestrator | Sunday 15 February 2026 06:02:09 +0000 (0:00:01.470) 0:08:47.155 *******
2026-02-15 06:02:29.667868 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-15 06:02:29.667872 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-15 06:02:29.667876 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-15 06:02:29.667881 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667886 | orchestrator |
2026-02-15 06:02:29.667891 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:02:29.667896 | orchestrator | Sunday 15 February 2026 06:02:10 +0000 (0:00:01.186) 0:08:48.626 *******
2026-02-15 06:02:29.667901 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667905 | orchestrator |
2026-02-15 06:02:29.667911 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:02:29.667916 | orchestrator | Sunday 15 February 2026 06:02:11 +0000 (0:00:01.409) 0:08:49.813 *******
2026-02-15 06:02:29.667921 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-15 06:02:29.667926 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:02:29.667931 | orchestrator |
2026-02-15 06:02:29.667935 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-15 06:02:29.667940 | orchestrator | Sunday 15 February 2026 06:02:13 +0000 (0:00:01.716) 0:08:51.222 *******
2026-02-15 06:02:29.667945 | orchestrator | changed: [testbed-node-0]
2026-02-15 06:02:29.667950 | orchestrator |
2026-02-15 06:02:29.667955 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-15 06:02:29.667960 | orchestrator | Sunday 15 February 2026 06:02:14 +0000 (0:00:01.178) 0:08:52.938 *******
2026-02-15 06:02:29.667965 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:02:29.667970 | orchestrator |
2026-02-15 06:02:29.667978 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-15 06:02:29.667987 | orchestrator | Sunday 15 February 2026 06:02:16 +0000 (0:00:01.178) 0:08:54.117 *******
2026-02-15 06:02:29.667992 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0
2026-02-15 06:02:29.667998 | orchestrator |
2026-02-15 06:02:29.668003 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-15 06:02:29.668007 | orchestrator | Sunday 15 February 2026 06:02:17 +0000 (0:00:01.516) 0:08:55.634 *******
2026-02-15 06:02:29.668012 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)]
2026-02-15 06:02:29.668017 | orchestrator
| 2026-02-15 06:02:29.668022 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-02-15 06:02:29.668027 | orchestrator | Sunday 15 February 2026 06:02:20 +0000 (0:00:03.399) 0:08:59.033 ******* 2026-02-15 06:02:29.668032 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:02:29.668037 | orchestrator | 2026-02-15 06:02:29.668042 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-02-15 06:02:29.668047 | orchestrator | Sunday 15 February 2026 06:02:22 +0000 (0:00:01.204) 0:09:00.238 ******* 2026-02-15 06:02:29.668052 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:02:29.668057 | orchestrator | 2026-02-15 06:02:29.668062 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-02-15 06:02:29.668067 | orchestrator | Sunday 15 February 2026 06:02:23 +0000 (0:00:01.207) 0:09:01.446 ******* 2026-02-15 06:02:29.668072 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:02:29.668076 | orchestrator | 2026-02-15 06:02:29.668081 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-02-15 06:02:29.668086 | orchestrator | Sunday 15 February 2026 06:02:24 +0000 (0:00:01.157) 0:09:02.603 ******* 2026-02-15 06:02:29.668091 | orchestrator | changed: [testbed-node-0] 2026-02-15 06:02:29.668096 | orchestrator | 2026-02-15 06:02:29.668101 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-02-15 06:02:29.668105 | orchestrator | Sunday 15 February 2026 06:02:26 +0000 (0:00:02.038) 0:09:04.642 ******* 2026-02-15 06:02:29.668110 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:02:29.668115 | orchestrator | 2026-02-15 06:02:29.668120 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-02-15 06:02:29.668126 | orchestrator | Sunday 15 February 2026 06:02:28 +0000 (0:00:01.598) 
0:09:06.240 ******* 2026-02-15 06:02:29.668131 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:02:29.668135 | orchestrator | 2026-02-15 06:02:29.668142 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-02-15 06:03:26.523122 | orchestrator | Sunday 15 February 2026 06:02:29 +0000 (0:00:01.517) 0:09:07.759 ******* 2026-02-15 06:03:26.523444 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.523477 | orchestrator | 2026-02-15 06:03:26.523498 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-02-15 06:03:26.523614 | orchestrator | Sunday 15 February 2026 06:02:31 +0000 (0:00:01.507) 0:09:09.266 ******* 2026-02-15 06:03:26.523650 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.523672 | orchestrator | 2026-02-15 06:03:26.523692 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-02-15 06:03:26.523712 | orchestrator | Sunday 15 February 2026 06:02:32 +0000 (0:00:01.714) 0:09:10.980 ******* 2026-02-15 06:03:26.523730 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.523750 | orchestrator | 2026-02-15 06:03:26.523769 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-02-15 06:03:26.523788 | orchestrator | Sunday 15 February 2026 06:02:34 +0000 (0:00:01.775) 0:09:12.756 ******* 2026-02-15 06:03:26.523806 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-02-15 06:03:26.523826 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-02-15 06:03:26.523844 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-15 06:03:26.523863 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-02-15 06:03:26.523882 | orchestrator | 2026-02-15 06:03:26.523899 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-02-15 
06:03:26.524014 | orchestrator | Sunday 15 February 2026 06:02:38 +0000 (0:00:03.825) 0:09:16.582 ******* 2026-02-15 06:03:26.524039 | orchestrator | changed: [testbed-node-0] 2026-02-15 06:03:26.524058 | orchestrator | 2026-02-15 06:03:26.524077 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-15 06:03:26.524095 | orchestrator | Sunday 15 February 2026 06:02:40 +0000 (0:00:02.012) 0:09:18.594 ******* 2026-02-15 06:03:26.524113 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.524130 | orchestrator | 2026-02-15 06:03:26.524147 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-15 06:03:26.524165 | orchestrator | Sunday 15 February 2026 06:02:41 +0000 (0:00:01.154) 0:09:19.748 ******* 2026-02-15 06:03:26.524183 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.524202 | orchestrator | 2026-02-15 06:03:26.524249 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-15 06:03:26.524269 | orchestrator | Sunday 15 February 2026 06:02:42 +0000 (0:00:01.136) 0:09:20.885 ******* 2026-02-15 06:03:26.524286 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.524304 | orchestrator | 2026-02-15 06:03:26.524321 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-15 06:03:26.524339 | orchestrator | Sunday 15 February 2026 06:02:44 +0000 (0:00:02.092) 0:09:22.978 ******* 2026-02-15 06:03:26.524357 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.524374 | orchestrator | 2026-02-15 06:03:26.524390 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-15 06:03:26.524408 | orchestrator | Sunday 15 February 2026 06:02:46 +0000 (0:00:01.478) 0:09:24.456 ******* 2026-02-15 06:03:26.524425 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:03:26.524443 | orchestrator | 2026-02-15 
06:03:26.524460 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-02-15 06:03:26.524478 | orchestrator | Sunday 15 February 2026 06:02:47 +0000 (0:00:01.142) 0:09:25.598 ******* 2026-02-15 06:03:26.524495 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0 2026-02-15 06:03:26.524513 | orchestrator | 2026-02-15 06:03:26.524549 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-15 06:03:26.524567 | orchestrator | Sunday 15 February 2026 06:02:49 +0000 (0:00:01.535) 0:09:27.134 ******* 2026-02-15 06:03:26.524584 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:03:26.524600 | orchestrator | 2026-02-15 06:03:26.524618 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-15 06:03:26.524636 | orchestrator | Sunday 15 February 2026 06:02:50 +0000 (0:00:01.109) 0:09:28.243 ******* 2026-02-15 06:03:26.524655 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:03:26.524673 | orchestrator | 2026-02-15 06:03:26.524691 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-15 06:03:26.524711 | orchestrator | Sunday 15 February 2026 06:02:51 +0000 (0:00:01.146) 0:09:29.390 ******* 2026-02-15 06:03:26.524729 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0 2026-02-15 06:03:26.524746 | orchestrator | 2026-02-15 06:03:26.524763 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-15 06:03:26.524780 | orchestrator | Sunday 15 February 2026 06:02:52 +0000 (0:00:01.503) 0:09:30.893 ******* 2026-02-15 06:03:26.524797 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.524815 | orchestrator | 2026-02-15 06:03:26.524832 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-15 
06:03:26.524851 | orchestrator | Sunday 15 February 2026 06:02:55 +0000 (0:00:02.340) 0:09:33.234 ******* 2026-02-15 06:03:26.524868 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.524886 | orchestrator | 2026-02-15 06:03:26.524904 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-15 06:03:26.524923 | orchestrator | Sunday 15 February 2026 06:02:57 +0000 (0:00:01.958) 0:09:35.192 ******* 2026-02-15 06:03:26.524941 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.524981 | orchestrator | 2026-02-15 06:03:26.525000 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-15 06:03:26.525018 | orchestrator | Sunday 15 February 2026 06:02:59 +0000 (0:00:02.339) 0:09:37.532 ******* 2026-02-15 06:03:26.525036 | orchestrator | changed: [testbed-node-0] 2026-02-15 06:03:26.525056 | orchestrator | 2026-02-15 06:03:26.525075 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-15 06:03:26.525094 | orchestrator | Sunday 15 February 2026 06:03:02 +0000 (0:00:03.153) 0:09:40.686 ******* 2026-02-15 06:03:26.525112 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0 2026-02-15 06:03:26.525131 | orchestrator | 2026-02-15 06:03:26.525181 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] 
************* 2026-02-15 06:03:26.525201 | orchestrator | Sunday 15 February 2026 06:03:04 +0000 (0:00:01.626) 0:09:42.312 ******* 2026-02-15 06:03:26.525247 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.525348 | orchestrator | 2026-02-15 06:03:26.525368 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-15 06:03:26.525388 | orchestrator | Sunday 15 February 2026 06:03:06 +0000 (0:00:02.207) 0:09:44.520 ******* 2026-02-15 06:03:26.525407 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:03:26.525426 | orchestrator | 2026-02-15 06:03:26.525444 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-15 06:03:26.525464 | orchestrator | Sunday 15 February 2026 06:03:09 +0000 (0:00:02.933) 0:09:47.454 ******* 2026-02-15 06:03:26.525485 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:03:26.525506 | orchestrator | 2026-02-15 06:03:26.525524 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-15 06:03:26.525545 | orchestrator | Sunday 15 February 2026 06:03:10 +0000 (0:00:01.155) 0:09:48.610 ******* 2026-02-15 06:03:26.525569 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-15 06:03:26.525593 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'cluster_network', 'value': 
'192.168.16.0/20'}]) 2026-02-15 06:03:26.525616 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-15 06:03:26.525638 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-15 06:03:26.525674 | orchestrator | ok: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-15 06:03:26.525696 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}])  2026-02-15 06:03:26.525737 | orchestrator | 2026-02-15 06:03:26.525758 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-15 06:03:26.525778 | orchestrator | Sunday 15 February 2026 06:03:20 +0000 (0:00:09.932) 0:09:58.542 ******* 
2026-02-15 06:03:26.525797 | orchestrator | changed: [testbed-node-0] 2026-02-15 06:03:26.525817 | orchestrator | 2026-02-15 06:03:26.525837 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:03:26.525859 | orchestrator | Sunday 15 February 2026 06:03:22 +0000 (0:00:02.505) 0:10:01.048 ******* 2026-02-15 06:03:26.525879 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 06:03:26.525899 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-15 06:03:26.525920 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-15 06:03:26.525940 | orchestrator | 2026-02-15 06:03:26.525959 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:03:26.525971 | orchestrator | Sunday 15 February 2026 06:03:25 +0000 (0:00:02.155) 0:10:03.204 ******* 2026-02-15 06:03:26.525982 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-15 06:03:26.525993 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-15 06:03:26.526003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-15 06:03:26.526382 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:03:26.526415 | orchestrator | 2026-02-15 06:03:26.526432 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-15 06:03:26.526467 | orchestrator | Sunday 15 February 2026 06:03:26 +0000 (0:00:01.407) 0:10:04.612 ******* 2026-02-15 06:04:04.667689 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:04:04.667807 | orchestrator | 2026-02-15 06:04:04.667824 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-15 06:04:04.667837 | orchestrator | Sunday 15 February 2026 06:03:27 +0000 (0:00:01.141) 0:10:05.753 ******* 2026-02-15 06:04:04.667850 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:04:04.667862 | orchestrator | 2026-02-15 06:04:04.667873 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-15 06:04:04.667884 | orchestrator | Sunday 15 February 2026 06:03:30 +0000 (0:00:02.432) 0:10:08.186 ******* 2026-02-15 06:04:04.667895 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:04:04.667907 | orchestrator | 2026-02-15 06:04:04.667918 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-15 06:04:04.667929 | orchestrator | Sunday 15 February 2026 06:03:31 +0000 (0:00:01.164) 0:10:09.351 ******* 2026-02-15 06:04:04.667940 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:04:04.667951 | orchestrator | 2026-02-15 06:04:04.667962 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-15 06:04:04.667973 | orchestrator | Sunday 15 February 2026 06:03:32 +0000 (0:00:01.140) 0:10:10.491 ******* 2026-02-15 06:04:04.667983 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:04:04.667994 | orchestrator | 2026-02-15 06:04:04.668006 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-15 06:04:04.668017 | orchestrator | Sunday 15 February 2026 06:03:33 +0000 (0:00:01.117) 0:10:11.609 ******* 2026-02-15 06:04:04.668027 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:04:04.668038 | orchestrator | 2026-02-15 06:04:04.668049 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-15 06:04:04.668060 | orchestrator | Sunday 15 February 2026 06:03:34 +0000 (0:00:01.128) 0:10:12.738 ******* 2026-02-15 06:04:04.668071 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:04:04.668082 | 
orchestrator | 2026-02-15 06:04:04.668093 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-15 06:04:04.668104 | orchestrator | Sunday 15 February 2026 06:03:35 +0000 (0:00:01.131) 0:10:13.869 ******* 2026-02-15 06:04:04.668138 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:04:04.668150 | orchestrator | 2026-02-15 06:04:04.668161 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-15 06:04:04.668172 | orchestrator | Sunday 15 February 2026 06:03:36 +0000 (0:00:01.152) 0:10:15.022 ******* 2026-02-15 06:04:04.668183 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:04:04.668194 | orchestrator | 2026-02-15 06:04:04.668205 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-15 06:04:04.668216 | orchestrator | 2026-02-15 06:04:04.668227 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-15 06:04:04.668264 | orchestrator | Sunday 15 February 2026 06:03:37 +0000 (0:00:00.985) 0:10:16.008 ******* 2026-02-15 06:04:04.668276 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:04.668287 | orchestrator | 2026-02-15 06:04:04.668298 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-15 06:04:04.668309 | orchestrator | Sunday 15 February 2026 06:03:39 +0000 (0:00:01.609) 0:10:17.617 ******* 2026-02-15 06:04:04.668320 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:04.668330 | orchestrator | 2026-02-15 06:04:04.668341 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-15 06:04:04.668352 | orchestrator | Sunday 15 February 2026 06:03:40 +0000 (0:00:00.796) 0:10:18.414 ******* 2026-02-15 06:04:04.668363 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:04.668374 | orchestrator | 2026-02-15 06:04:04.668400 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-02-15 06:04:04.668411 | orchestrator | Sunday 15 February 2026 06:03:41 +0000 (0:00:00.762) 0:10:19.176 ******* 2026-02-15 06:04:04.668422 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:04.668433 | orchestrator | 2026-02-15 06:04:04.668444 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 06:04:04.668454 | orchestrator | Sunday 15 February 2026 06:03:41 +0000 (0:00:00.822) 0:10:19.998 ******* 2026-02-15 06:04:04.668465 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1 2026-02-15 06:04:04.668476 | orchestrator | 2026-02-15 06:04:04.668487 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 06:04:04.668497 | orchestrator | Sunday 15 February 2026 06:03:43 +0000 (0:00:01.258) 0:10:21.256 ******* 2026-02-15 06:04:04.668508 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:04.668519 | orchestrator | 2026-02-15 06:04:04.668530 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 06:04:04.668541 | orchestrator | Sunday 15 February 2026 06:03:44 +0000 (0:00:01.462) 0:10:22.719 ******* 2026-02-15 06:04:04.668551 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:04.668562 | orchestrator | 2026-02-15 06:04:04.668573 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 06:04:04.668584 | orchestrator | Sunday 15 February 2026 06:03:45 +0000 (0:00:01.143) 0:10:23.863 ******* 2026-02-15 06:04:04.668594 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:04.668605 | orchestrator | 2026-02-15 06:04:04.668616 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 06:04:04.668627 | orchestrator | Sunday 15 February 2026 06:03:47 +0000 (0:00:01.499) 0:10:25.362 
******* 2026-02-15 06:04:04.668638 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:04.668648 | orchestrator | 2026-02-15 06:04:04.668659 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 06:04:04.668670 | orchestrator | Sunday 15 February 2026 06:03:48 +0000 (0:00:01.160) 0:10:26.523 ******* 2026-02-15 06:04:04.668681 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:04.668691 | orchestrator | 2026-02-15 06:04:04.668702 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 06:04:04.668713 | orchestrator | Sunday 15 February 2026 06:03:49 +0000 (0:00:01.135) 0:10:27.658 ******* 2026-02-15 06:04:04.668724 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:04.668735 | orchestrator | 2026-02-15 06:04:04.668753 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 06:04:04.668765 | orchestrator | Sunday 15 February 2026 06:03:50 +0000 (0:00:01.200) 0:10:28.859 ******* 2026-02-15 06:04:04.668793 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:04.668805 | orchestrator | 2026-02-15 06:04:04.668816 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 06:04:04.668827 | orchestrator | Sunday 15 February 2026 06:03:51 +0000 (0:00:01.171) 0:10:30.031 ******* 2026-02-15 06:04:04.668838 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:04.668848 | orchestrator | 2026-02-15 06:04:04.668859 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 06:04:04.668870 | orchestrator | Sunday 15 February 2026 06:03:53 +0000 (0:00:01.151) 0:10:31.183 ******* 2026-02-15 06:04:04.668880 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:04:04.668891 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-15 
06:04:04.668902 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:04:04.668913 | orchestrator | 2026-02-15 06:04:04.668923 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 06:04:04.668934 | orchestrator | Sunday 15 February 2026 06:03:55 +0000 (0:00:01.984) 0:10:33.168 ******* 2026-02-15 06:04:04.668945 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:04.668955 | orchestrator | 2026-02-15 06:04:04.668966 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 06:04:04.668977 | orchestrator | Sunday 15 February 2026 06:03:56 +0000 (0:00:01.282) 0:10:34.451 ******* 2026-02-15 06:04:04.668987 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:04:04.668998 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-15 06:04:04.669009 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:04:04.669019 | orchestrator | 2026-02-15 06:04:04.669030 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 06:04:04.669041 | orchestrator | Sunday 15 February 2026 06:03:59 +0000 (0:00:03.207) 0:10:37.658 ******* 2026-02-15 06:04:04.669052 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-15 06:04:04.669062 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-15 06:04:04.669073 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-15 06:04:04.669084 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:04.669094 | orchestrator | 2026-02-15 06:04:04.669105 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 06:04:04.669116 | orchestrator | Sunday 15 February 2026 06:04:01 +0000 (0:00:01.804) 
0:10:39.463 ******* 2026-02-15 06:04:04.669129 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 06:04:04.669143 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 06:04:04.669160 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 06:04:04.669171 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:04.669182 | orchestrator | 2026-02-15 06:04:04.669193 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 06:04:04.669203 | orchestrator | Sunday 15 February 2026 06:04:03 +0000 (0:00:02.051) 0:10:41.515 ******* 2026-02-15 06:04:04.669216 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:04.669236 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:04.669268 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:04.669279 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:04.669290 | orchestrator | 2026-02-15 06:04:04.669307 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 06:04:24.573882 | orchestrator | Sunday 15 February 2026 06:04:04 +0000 (0:00:01.233) 0:10:42.748 ******* 2026-02-15 06:04:24.574005 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:03:56.894004', 'end': '2026-02-15 06:03:56.942676', 'delta': '0:00:00.048672', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 06:04:24.574114 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '3aeb4857506c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:03:57.770364', 'end': '2026-02-15 
06:03:57.815279', 'delta': '0:00:00.044915', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3aeb4857506c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 06:04:24.574141 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '9cffadff9441', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:03:58.375159', 'end': '2026-02-15 06:03:58.427547', 'delta': '0:00:00.052388', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9cffadff9441'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 06:04:24.574162 | orchestrator | 2026-02-15 06:04:24.574201 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 06:04:24.574236 | orchestrator | Sunday 15 February 2026 06:04:05 +0000 (0:00:01.208) 0:10:43.958 ******* 2026-02-15 06:04:24.574248 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:24.574288 | orchestrator | 2026-02-15 06:04:24.574300 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 06:04:24.574311 | orchestrator | Sunday 15 February 2026 06:04:07 +0000 (0:00:01.301) 0:10:45.259 ******* 2026-02-15 06:04:24.574322 | orchestrator | skipping: 
[testbed-node-1] 2026-02-15 06:04:24.574334 | orchestrator | 2026-02-15 06:04:24.574345 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-15 06:04:24.574355 | orchestrator | Sunday 15 February 2026 06:04:08 +0000 (0:00:01.257) 0:10:46.516 ******* 2026-02-15 06:04:24.574366 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:24.574377 | orchestrator | 2026-02-15 06:04:24.574388 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-15 06:04:24.574398 | orchestrator | Sunday 15 February 2026 06:04:09 +0000 (0:00:01.146) 0:10:47.662 ******* 2026-02-15 06:04:24.574409 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] 2026-02-15 06:04:24.574423 | orchestrator | 2026-02-15 06:04:24.574436 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:04:24.574448 | orchestrator | Sunday 15 February 2026 06:04:11 +0000 (0:00:01.986) 0:10:49.649 ******* 2026-02-15 06:04:24.574460 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:24.574473 | orchestrator | 2026-02-15 06:04:24.574490 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-15 06:04:24.574509 | orchestrator | Sunday 15 February 2026 06:04:12 +0000 (0:00:01.141) 0:10:50.791 ******* 2026-02-15 06:04:24.574526 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:24.574545 | orchestrator | 2026-02-15 06:04:24.574563 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-15 06:04:24.574582 | orchestrator | Sunday 15 February 2026 06:04:13 +0000 (0:00:01.148) 0:10:51.939 ******* 2026-02-15 06:04:24.574601 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:24.574620 | orchestrator | 2026-02-15 06:04:24.574641 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 
06:04:24.574660 | orchestrator | Sunday 15 February 2026 06:04:15 +0000 (0:00:01.254) 0:10:53.193 ******* 2026-02-15 06:04:24.574679 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:24.574692 | orchestrator | 2026-02-15 06:04:24.574705 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-15 06:04:24.574740 | orchestrator | Sunday 15 February 2026 06:04:16 +0000 (0:00:01.139) 0:10:54.333 ******* 2026-02-15 06:04:24.574755 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:24.574768 | orchestrator | 2026-02-15 06:04:24.574779 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-15 06:04:24.574790 | orchestrator | Sunday 15 February 2026 06:04:17 +0000 (0:00:01.126) 0:10:55.459 ******* 2026-02-15 06:04:24.574801 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:24.574812 | orchestrator | 2026-02-15 06:04:24.574822 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-15 06:04:24.574833 | orchestrator | Sunday 15 February 2026 06:04:18 +0000 (0:00:01.148) 0:10:56.607 ******* 2026-02-15 06:04:24.574843 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:24.574854 | orchestrator | 2026-02-15 06:04:24.574865 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-15 06:04:24.574875 | orchestrator | Sunday 15 February 2026 06:04:19 +0000 (0:00:01.271) 0:10:57.879 ******* 2026-02-15 06:04:24.574886 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:24.574897 | orchestrator | 2026-02-15 06:04:24.574907 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-15 06:04:24.574918 | orchestrator | Sunday 15 February 2026 06:04:20 +0000 (0:00:01.155) 0:10:59.035 ******* 2026-02-15 06:04:24.574929 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:24.574939 | 
orchestrator | 2026-02-15 06:04:24.574950 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-15 06:04:24.574972 | orchestrator | Sunday 15 February 2026 06:04:22 +0000 (0:00:01.182) 0:11:00.218 ******* 2026-02-15 06:04:24.574982 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:24.574993 | orchestrator | 2026-02-15 06:04:24.575003 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-15 06:04:24.575014 | orchestrator | Sunday 15 February 2026 06:04:23 +0000 (0:00:01.157) 0:11:01.376 ******* 2026-02-15 06:04:24.575027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:04:24.575040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:04:24.575059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-02-15 06:04:24.575072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 06:04:24.575084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:04:24.575096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:04:24.575115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 
06:04:25.807748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47bb0aa1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 06:04:25.807869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:04:25.807885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:04:25.807896 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:25.807909 | orchestrator | 2026-02-15 06:04:25.807919 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 06:04:25.807930 | orchestrator | Sunday 15 February 2026 06:04:24 +0000 (0:00:01.279) 0:11:02.656 ******* 2026-02-15 06:04:25.807941 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:25.807971 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:25.807989 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:25.808001 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:25.808026 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:25.808037 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:25.808048 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:25.808067 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47bb0aa1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:56.957318 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:56.957430 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:04:56.957444 | orchestrator | skipping: [testbed-node-1] 2026-02-15 
06:04:56.957458 | orchestrator | 2026-02-15 06:04:56.957471 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-15 06:04:56.957483 | orchestrator | Sunday 15 February 2026 06:04:25 +0000 (0:00:01.243) 0:11:03.900 ******* 2026-02-15 06:04:56.957494 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:56.957502 | orchestrator | 2026-02-15 06:04:56.957509 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-15 06:04:56.957515 | orchestrator | Sunday 15 February 2026 06:04:27 +0000 (0:00:01.484) 0:11:05.384 ******* 2026-02-15 06:04:56.957521 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:56.957527 | orchestrator | 2026-02-15 06:04:56.957534 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:04:56.957559 | orchestrator | Sunday 15 February 2026 06:04:28 +0000 (0:00:01.187) 0:11:06.572 ******* 2026-02-15 06:04:56.957566 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:04:56.957572 | orchestrator | 2026-02-15 06:04:56.957578 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:04:56.957584 | orchestrator | Sunday 15 February 2026 06:04:29 +0000 (0:00:01.484) 0:11:08.056 ******* 2026-02-15 06:04:56.957590 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.957596 | orchestrator | 2026-02-15 06:04:56.957602 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:04:56.957608 | orchestrator | Sunday 15 February 2026 06:04:31 +0000 (0:00:01.170) 0:11:09.227 ******* 2026-02-15 06:04:56.957615 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.957621 | orchestrator | 2026-02-15 06:04:56.957627 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:04:56.957633 | orchestrator | Sunday 15 February 2026 
06:04:32 +0000 (0:00:01.240) 0:11:10.467 ******* 2026-02-15 06:04:56.957639 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.957645 | orchestrator | 2026-02-15 06:04:56.957655 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:04:56.957665 | orchestrator | Sunday 15 February 2026 06:04:33 +0000 (0:00:01.206) 0:11:11.674 ******* 2026-02-15 06:04:56.957675 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-02-15 06:04:56.957686 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-15 06:04:56.957695 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-02-15 06:04:56.957705 | orchestrator | 2026-02-15 06:04:56.957715 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:04:56.957725 | orchestrator | Sunday 15 February 2026 06:04:35 +0000 (0:00:02.067) 0:11:13.741 ******* 2026-02-15 06:04:56.957736 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-15 06:04:56.957747 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-15 06:04:56.957759 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-15 06:04:56.957765 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.957771 | orchestrator | 2026-02-15 06:04:56.957777 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-15 06:04:56.957784 | orchestrator | Sunday 15 February 2026 06:04:36 +0000 (0:00:01.168) 0:11:14.910 ******* 2026-02-15 06:04:56.957790 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.957797 | orchestrator | 2026-02-15 06:04:56.957804 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-15 06:04:56.957811 | orchestrator | Sunday 15 February 2026 06:04:37 +0000 (0:00:01.154) 0:11:16.065 ******* 2026-02-15 06:04:56.957818 | 
orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:04:56.957827 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-15 06:04:56.957834 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:04:56.957841 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:04:56.957848 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:04:56.957855 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:04:56.957878 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:04:56.957886 | orchestrator | 2026-02-15 06:04:56.957893 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-15 06:04:56.957900 | orchestrator | Sunday 15 February 2026 06:04:39 +0000 (0:00:01.937) 0:11:18.003 ******* 2026-02-15 06:04:56.957913 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:04:56.957921 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-15 06:04:56.957928 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:04:56.957941 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:04:56.957948 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:04:56.957955 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:04:56.957962 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:04:56.957969 | orchestrator | 2026-02-15 06:04:56.957976 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-15 06:04:56.957983 | orchestrator | Sunday 15 February 2026 06:04:42 +0000 (0:00:02.243) 0:11:20.246 ******* 2026-02-15 06:04:56.957990 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.957997 | orchestrator | 2026-02-15 06:04:56.958004 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-15 06:04:56.958012 | orchestrator | Sunday 15 February 2026 06:04:43 +0000 (0:00:00.900) 0:11:21.146 ******* 2026-02-15 06:04:56.958059 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.958066 | orchestrator | 2026-02-15 06:04:56.958072 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-15 06:04:56.958079 | orchestrator | Sunday 15 February 2026 06:04:43 +0000 (0:00:00.906) 0:11:22.053 ******* 2026-02-15 06:04:56.958085 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.958091 | orchestrator | 2026-02-15 06:04:56.958097 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-15 06:04:56.958103 | orchestrator | Sunday 15 February 2026 06:04:44 +0000 (0:00:00.854) 0:11:22.907 ******* 2026-02-15 06:04:56.958109 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.958115 | orchestrator | 2026-02-15 06:04:56.958121 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-15 06:04:56.958127 | orchestrator | Sunday 15 February 2026 06:04:45 +0000 (0:00:00.877) 0:11:23.785 ******* 2026-02-15 06:04:56.958134 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.958140 | orchestrator | 2026-02-15 06:04:56.958146 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-15 06:04:56.958152 | orchestrator | Sunday 15 February 2026 06:04:46 +0000 (0:00:00.811) 0:11:24.596 ******* 
2026-02-15 06:04:56.958158 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-15 06:04:56.958165 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-15 06:04:56.958171 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-15 06:04:56.958177 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.958183 | orchestrator | 2026-02-15 06:04:56.958189 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-15 06:04:56.958195 | orchestrator | Sunday 15 February 2026 06:04:47 +0000 (0:00:01.092) 0:11:25.689 ******* 2026-02-15 06:04:56.958202 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-15 06:04:56.958208 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-15 06:04:56.958214 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-15 06:04:56.958220 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-15 06:04:56.958226 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-15 06:04:56.958232 | orchestrator | skipping: [testbed-node-1] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-15 06:04:56.958239 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:04:56.958245 | orchestrator | 2026-02-15 06:04:56.958251 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-15 06:04:56.958257 | orchestrator | Sunday 15 February 2026 06:04:49 +0000 (0:00:01.815) 0:11:27.504 ******* 2026-02-15 06:04:56.958263 | orchestrator | changed: [testbed-node-1] => (item=testbed-node-1) 2026-02-15 06:04:56.958299 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-15 06:04:56.958307 | orchestrator | 2026-02-15 06:04:56.958313 | 
orchestrator | TASK [Mask the mgr service] ****************************************************
2026-02-15 06:04:56.958319 | orchestrator | Sunday 15 February 2026 06:04:52 +0000 (0:00:03.068) 0:11:30.572 *******
2026-02-15 06:04:56.958325 | orchestrator | changed: [testbed-node-1]
2026-02-15 06:04:56.958331 | orchestrator |
2026-02-15 06:04:56.958337 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 06:04:56.958343 | orchestrator | Sunday 15 February 2026 06:04:54 +0000 (0:00:02.118) 0:11:32.691 *******
2026-02-15 06:04:56.958349 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-02-15 06:04:56.958356 | orchestrator |
2026-02-15 06:04:56.958362 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 06:04:56.958368 | orchestrator | Sunday 15 February 2026 06:04:55 +0000 (0:00:01.232) 0:11:33.924 *******
2026-02-15 06:04:56.958374 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-02-15 06:04:56.958380 | orchestrator |
2026-02-15 06:04:56.958387 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 06:04:56.958398 | orchestrator | Sunday 15 February 2026 06:04:56 +0000 (0:00:01.119) 0:11:35.044 *******
2026-02-15 06:05:40.067287 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:05:40.067430 | orchestrator |
2026-02-15 06:05:40.067448 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 06:05:40.067461 | orchestrator | Sunday 15 February 2026 06:04:58 +0000 (0:00:01.593) 0:11:36.638 *******
2026-02-15 06:05:40.067473 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.067485 | orchestrator |
2026-02-15 06:05:40.067497 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 06:05:40.067508 | orchestrator | Sunday 15 February 2026 06:04:59 +0000 (0:00:01.155) 0:11:37.793 *******
2026-02-15 06:05:40.067519 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.067530 | orchestrator |
2026-02-15 06:05:40.067541 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 06:05:40.067552 | orchestrator | Sunday 15 February 2026 06:05:00 +0000 (0:00:01.199) 0:11:38.992 *******
2026-02-15 06:05:40.067562 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.067573 | orchestrator |
2026-02-15 06:05:40.067584 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 06:05:40.067595 | orchestrator | Sunday 15 February 2026 06:05:02 +0000 (0:00:01.122) 0:11:40.115 *******
2026-02-15 06:05:40.067605 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:05:40.067617 | orchestrator |
2026-02-15 06:05:40.067627 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 06:05:40.067638 | orchestrator | Sunday 15 February 2026 06:05:03 +0000 (0:00:01.549) 0:11:41.664 *******
2026-02-15 06:05:40.067649 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.067659 | orchestrator |
2026-02-15 06:05:40.067670 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 06:05:40.067681 | orchestrator | Sunday 15 February 2026 06:05:04 +0000 (0:00:01.151) 0:11:42.816 *******
2026-02-15 06:05:40.067692 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.067703 | orchestrator |
2026-02-15 06:05:40.067714 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 06:05:40.067725 | orchestrator | Sunday 15 February 2026 06:05:05 +0000 (0:00:01.226) 0:11:44.043 *******
2026-02-15 06:05:40.067735 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:05:40.067746 | orchestrator |
2026-02-15 06:05:40.067757 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 06:05:40.067768 | orchestrator | Sunday 15 February 2026 06:05:07 +0000 (0:00:01.605) 0:11:45.648 *******
2026-02-15 06:05:40.067779 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:05:40.067790 | orchestrator |
2026-02-15 06:05:40.067801 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 06:05:40.067836 | orchestrator | Sunday 15 February 2026 06:05:09 +0000 (0:00:01.514) 0:11:47.163 *******
2026-02-15 06:05:40.067852 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.067865 | orchestrator |
2026-02-15 06:05:40.067878 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 06:05:40.067891 | orchestrator | Sunday 15 February 2026 06:05:09 +0000 (0:00:00.760) 0:11:47.923 *******
2026-02-15 06:05:40.067904 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:05:40.067917 | orchestrator |
2026-02-15 06:05:40.067930 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 06:05:40.067943 | orchestrator | Sunday 15 February 2026 06:05:10 +0000 (0:00:00.894) 0:11:48.818 *******
2026-02-15 06:05:40.067956 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.067969 | orchestrator |
2026-02-15 06:05:40.067982 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 06:05:40.067994 | orchestrator | Sunday 15 February 2026 06:05:11 +0000 (0:00:00.753) 0:11:49.572 *******
2026-02-15 06:05:40.068007 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068020 | orchestrator |
2026-02-15 06:05:40.068032 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 06:05:40.068045 | orchestrator | Sunday 15 February 2026 06:05:12 +0000 (0:00:00.876) 0:11:50.449 *******
2026-02-15 06:05:40.068057 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068070 | orchestrator |
2026-02-15 06:05:40.068083 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 06:05:40.068096 | orchestrator | Sunday 15 February 2026 06:05:13 +0000 (0:00:00.748) 0:11:51.197 *******
2026-02-15 06:05:40.068109 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068121 | orchestrator |
2026-02-15 06:05:40.068177 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 06:05:40.068191 | orchestrator | Sunday 15 February 2026 06:05:13 +0000 (0:00:00.782) 0:11:51.980 *******
2026-02-15 06:05:40.068202 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068213 | orchestrator |
2026-02-15 06:05:40.068224 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 06:05:40.068235 | orchestrator | Sunday 15 February 2026 06:05:14 +0000 (0:00:00.767) 0:11:52.747 *******
2026-02-15 06:05:40.068246 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:05:40.068256 | orchestrator |
2026-02-15 06:05:40.068267 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 06:05:40.068278 | orchestrator | Sunday 15 February 2026 06:05:15 +0000 (0:00:00.832) 0:11:53.580 *******
2026-02-15 06:05:40.068289 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:05:40.068318 | orchestrator |
2026-02-15 06:05:40.068329 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 06:05:40.068340 | orchestrator | Sunday 15 February 2026 06:05:16 +0000 (0:00:00.789) 0:11:54.369 *******
2026-02-15 06:05:40.068351 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:05:40.068362 | orchestrator |
2026-02-15 06:05:40.068373 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-15 06:05:40.068383 | orchestrator | Sunday 15 February 2026 06:05:17 +0000 (0:00:00.880) 0:11:55.250 *******
2026-02-15 06:05:40.068394 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068405 | orchestrator |
2026-02-15 06:05:40.068416 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-15 06:05:40.068426 | orchestrator | Sunday 15 February 2026 06:05:17 +0000 (0:00:00.818) 0:11:56.068 *******
2026-02-15 06:05:40.068437 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068448 | orchestrator |
2026-02-15 06:05:40.068459 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-15 06:05:40.068487 | orchestrator | Sunday 15 February 2026 06:05:18 +0000 (0:00:00.779) 0:11:56.848 *******
2026-02-15 06:05:40.068499 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068510 | orchestrator |
2026-02-15 06:05:40.068521 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-15 06:05:40.068548 | orchestrator | Sunday 15 February 2026 06:05:19 +0000 (0:00:00.851) 0:11:57.699 *******
2026-02-15 06:05:40.068559 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068570 | orchestrator |
2026-02-15 06:05:40.068581 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-15 06:05:40.068591 | orchestrator | Sunday 15 February 2026 06:05:20 +0000 (0:00:00.807) 0:11:58.506 *******
2026-02-15 06:05:40.068602 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068613 | orchestrator |
2026-02-15 06:05:40.068624 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-15 06:05:40.068634 | orchestrator | Sunday 15 February 2026 06:05:21 +0000 (0:00:00.770) 0:11:59.277 *******
2026-02-15 06:05:40.068645 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068656 | orchestrator |
2026-02-15 06:05:40.068667 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-15 06:05:40.068677 | orchestrator | Sunday 15 February 2026 06:05:21 +0000 (0:00:00.760) 0:12:00.038 *******
2026-02-15 06:05:40.068688 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068699 | orchestrator |
2026-02-15 06:05:40.068710 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-15 06:05:40.068721 | orchestrator | Sunday 15 February 2026 06:05:22 +0000 (0:00:00.801) 0:12:00.840 *******
2026-02-15 06:05:40.068732 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068742 | orchestrator |
2026-02-15 06:05:40.068753 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-15 06:05:40.068764 | orchestrator | Sunday 15 February 2026 06:05:23 +0000 (0:00:00.818) 0:12:01.658 *******
2026-02-15 06:05:40.068775 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068785 | orchestrator |
2026-02-15 06:05:40.068796 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-15 06:05:40.068807 | orchestrator | Sunday 15 February 2026 06:05:24 +0000 (0:00:00.764) 0:12:02.422 *******
2026-02-15 06:05:40.068817 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068828 | orchestrator |
2026-02-15 06:05:40.068839 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-15 06:05:40.068850 | orchestrator | Sunday 15 February 2026 06:05:25 +0000 (0:00:00.782) 0:12:03.205 *******
2026-02-15 06:05:40.068860 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068871 | orchestrator |
2026-02-15 06:05:40.068882 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-15 06:05:40.068893 | orchestrator | Sunday 15 February 2026 06:05:25 +0000 (0:00:00.815) 0:12:04.020 *******
2026-02-15 06:05:40.068903 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.068914 | orchestrator |
2026-02-15 06:05:40.068925 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-15 06:05:40.068936 | orchestrator | Sunday 15 February 2026 06:05:26 +0000 (0:00:00.793) 0:12:04.814 *******
2026-02-15 06:05:40.068946 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:05:40.068957 | orchestrator |
2026-02-15 06:05:40.068968 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-15 06:05:40.068979 | orchestrator | Sunday 15 February 2026 06:05:28 +0000 (0:00:01.628) 0:12:06.442 *******
2026-02-15 06:05:40.068989 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:05:40.069000 | orchestrator |
2026-02-15 06:05:40.069011 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-15 06:05:40.069022 | orchestrator | Sunday 15 February 2026 06:05:30 +0000 (0:00:02.064) 0:12:08.507 *******
2026-02-15 06:05:40.069032 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-02-15 06:05:40.069044 | orchestrator |
2026-02-15 06:05:40.069054 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-15 06:05:40.069065 | orchestrator | Sunday 15 February 2026 06:05:31 +0000 (0:00:01.245) 0:12:09.752 *******
2026-02-15 06:05:40.069076 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.069087 | orchestrator |
2026-02-15 06:05:40.069104 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-15 06:05:40.069115 | orchestrator | Sunday 15 February 2026 06:05:32 +0000 (0:00:01.116) 0:12:10.869 *******
2026-02-15 06:05:40.069126 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.069136 | orchestrator |
2026-02-15 06:05:40.069147 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-15 06:05:40.069158 | orchestrator | Sunday 15 February 2026 06:05:33 +0000 (0:00:01.225) 0:12:12.094 *******
2026-02-15 06:05:40.069169 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-15 06:05:40.069180 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-15 06:05:40.069190 | orchestrator |
2026-02-15 06:05:40.069201 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-15 06:05:40.069212 | orchestrator | Sunday 15 February 2026 06:05:35 +0000 (0:00:01.828) 0:12:13.923 *******
2026-02-15 06:05:40.069222 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:05:40.069233 | orchestrator |
2026-02-15 06:05:40.069244 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-15 06:05:40.069255 | orchestrator | Sunday 15 February 2026 06:05:37 +0000 (0:00:01.499) 0:12:15.422 *******
2026-02-15 06:05:40.069266 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.069276 | orchestrator |
2026-02-15 06:05:40.069287 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-15 06:05:40.069333 | orchestrator | Sunday 15 February 2026 06:05:38 +0000 (0:00:01.208) 0:12:16.631 *******
2026-02-15 06:05:40.069346 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:05:40.069357 | orchestrator |
2026-02-15 06:05:40.069367 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-15 06:05:40.069378 | orchestrator | Sunday 15 February 2026 06:05:39 +0000 (0:00:00.767) 0:12:17.398 *******
2026-02-15 06:05:40.069396 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.676613 | orchestrator |
2026-02-15 06:06:20.676739 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-15 06:06:20.676759 | orchestrator | Sunday 15 February 2026 06:05:40 +0000 (0:00:00.759) 0:12:18.158 *******
2026-02-15 06:06:20.676791 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-15 06:06:20.676805 | orchestrator |
2026-02-15 06:06:20.676818 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-15 06:06:20.676831 | orchestrator | Sunday 15 February 2026 06:05:41 +0000 (0:00:01.135) 0:12:19.294 *******
2026-02-15 06:06:20.676844 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:06:20.676858 | orchestrator |
2026-02-15 06:06:20.676870 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-15 06:06:20.676884 | orchestrator | Sunday 15 February 2026 06:05:42 +0000 (0:00:01.742) 0:12:21.036 *******
2026-02-15 06:06:20.676897 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-15 06:06:20.676910 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-15 06:06:20.676922 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-15 06:06:20.676936 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.676950 | orchestrator |
2026-02-15 06:06:20.676962 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-15 06:06:20.676974 | orchestrator | Sunday 15 February 2026 06:05:44 +0000 (0:00:01.197) 0:12:22.233 *******
2026-02-15 06:06:20.676986 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.676998 | orchestrator |
2026-02-15 06:06:20.677010 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-15 06:06:20.677023 | orchestrator | Sunday 15 February 2026 06:05:45 +0000 (0:00:01.171) 0:12:23.405 *******
2026-02-15 06:06:20.677035 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677047 | orchestrator |
2026-02-15 06:06:20.677060 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-15 06:06:20.677100 | orchestrator | Sunday 15 February 2026 06:05:46 +0000 (0:00:01.238) 0:12:24.643 *******
2026-02-15 06:06:20.677114 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677127 | orchestrator |
2026-02-15 06:06:20.677141 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-15 06:06:20.677155 | orchestrator | Sunday 15 February 2026 06:05:47 +0000 (0:00:01.180) 0:12:25.824 *******
2026-02-15 06:06:20.677168 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677182 | orchestrator |
2026-02-15 06:06:20.677195 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-15 06:06:20.677207 | orchestrator | Sunday 15 February 2026 06:05:48 +0000 (0:00:01.254) 0:12:27.078 *******
2026-02-15 06:06:20.677220 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677233 | orchestrator |
2026-02-15 06:06:20.677247 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-15 06:06:20.677260 | orchestrator | Sunday 15 February 2026 06:05:49 +0000 (0:00:00.797) 0:12:27.876 *******
2026-02-15 06:06:20.677274 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:06:20.677290 | orchestrator |
2026-02-15 06:06:20.677303 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-15 06:06:20.677317 | orchestrator | Sunday 15 February 2026 06:05:51 +0000 (0:00:02.111) 0:12:29.987 *******
2026-02-15 06:06:20.677364 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:06:20.677377 | orchestrator |
2026-02-15 06:06:20.677390 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-15 06:06:20.677403 | orchestrator | Sunday 15 February 2026 06:05:52 +0000 (0:00:00.827) 0:12:30.814 *******
2026-02-15 06:06:20.677416 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-15 06:06:20.677429 | orchestrator |
2026-02-15 06:06:20.677442 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-15 06:06:20.677454 | orchestrator | Sunday 15 February 2026 06:05:53 +0000 (0:00:01.159) 0:12:31.974 *******
2026-02-15 06:06:20.677467 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677480 | orchestrator |
2026-02-15 06:06:20.677492 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-15 06:06:20.677504 | orchestrator | Sunday 15 February 2026 06:05:55 +0000 (0:00:01.197) 0:12:33.172 *******
2026-02-15 06:06:20.677516 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677526 | orchestrator |
2026-02-15 06:06:20.677537 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-15 06:06:20.677549 | orchestrator | Sunday 15 February 2026 06:05:56 +0000 (0:00:01.167) 0:12:34.339 *******
2026-02-15 06:06:20.677560 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677570 | orchestrator |
2026-02-15 06:06:20.677580 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-15 06:06:20.677591 | orchestrator | Sunday 15 February 2026 06:05:57 +0000 (0:00:01.115) 0:12:35.455 *******
2026-02-15 06:06:20.677601 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677611 | orchestrator |
2026-02-15 06:06:20.677622 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-15 06:06:20.677710 | orchestrator | Sunday 15 February 2026 06:05:58 +0000 (0:00:01.205) 0:12:36.661 *******
2026-02-15 06:06:20.677719 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677726 | orchestrator |
2026-02-15 06:06:20.677733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-15 06:06:20.677740 | orchestrator | Sunday 15 February 2026 06:05:59 +0000 (0:00:01.154) 0:12:37.815 *******
2026-02-15 06:06:20.677747 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677753 | orchestrator |
2026-02-15 06:06:20.677760 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-15 06:06:20.677767 | orchestrator | Sunday 15 February 2026 06:06:00 +0000 (0:00:01.159) 0:12:38.975 *******
2026-02-15 06:06:20.677773 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677792 | orchestrator |
2026-02-15 06:06:20.677799 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-15 06:06:20.677827 | orchestrator | Sunday 15 February 2026 06:06:02 +0000 (0:00:01.152) 0:12:40.128 *******
2026-02-15 06:06:20.677834 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.677841 | orchestrator |
2026-02-15 06:06:20.677848 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-15 06:06:20.677863 | orchestrator | Sunday 15 February 2026 06:06:03 +0000 (0:00:01.180) 0:12:41.309 *******
2026-02-15 06:06:20.677870 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:06:20.677877 | orchestrator |
2026-02-15 06:06:20.677884 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-15 06:06:20.677891 | orchestrator | Sunday 15 February 2026 06:06:04 +0000 (0:00:00.815) 0:12:42.124 *******
2026-02-15 06:06:20.677897 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-15 06:06:20.677905 | orchestrator |
2026-02-15 06:06:20.677911 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-15 06:06:20.677918 | orchestrator | Sunday 15 February 2026 06:06:05 +0000 (0:00:01.268) 0:12:43.392 *******
2026-02-15 06:06:20.677925 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-15 06:06:20.677932 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-15 06:06:20.677940 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-15 06:06:20.677946 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-15 06:06:20.677953 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-15 06:06:20.677960 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-15 06:06:20.677966 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-15 06:06:20.677973 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-15 06:06:20.677980 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-15 06:06:20.677986 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-15 06:06:20.677993 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 06:06:20.678000 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 06:06:20.678006 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 06:06:20.678013 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 06:06:20.678068 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-15 06:06:20.678076 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-15 06:06:20.678083 | orchestrator |
2026-02-15 06:06:20.678089 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-15 06:06:20.678096 | orchestrator | Sunday 15 February 2026 06:06:11 +0000 (0:00:06.438) 0:12:49.831 *******
2026-02-15 06:06:20.678103 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.678109 | orchestrator |
2026-02-15 06:06:20.678116 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-15 06:06:20.678123 | orchestrator | Sunday 15 February 2026 06:06:12 +0000 (0:00:00.800) 0:12:50.631 *******
2026-02-15 06:06:20.678129 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.678136 | orchestrator |
2026-02-15 06:06:20.678143 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 06:06:20.678149 | orchestrator | Sunday 15 February 2026 06:06:13 +0000 (0:00:00.798) 0:12:51.430 *******
2026-02-15 06:06:20.678156 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.678163 | orchestrator |
2026-02-15 06:06:20.678169 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 06:06:20.678176 | orchestrator | Sunday 15 February 2026 06:06:14 +0000 (0:00:00.801) 0:12:52.231 *******
2026-02-15 06:06:20.678183 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.678189 | orchestrator |
2026-02-15 06:06:20.678196 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 06:06:20.678209 | orchestrator | Sunday 15 February 2026 06:06:14 +0000 (0:00:00.787) 0:12:53.019 *******
2026-02-15 06:06:20.678216 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.678222 | orchestrator |
2026-02-15 06:06:20.678229 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 06:06:20.678236 | orchestrator | Sunday 15 February 2026 06:06:15 +0000 (0:00:00.850) 0:12:53.869 *******
2026-02-15 06:06:20.678243 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.678249 | orchestrator |
2026-02-15 06:06:20.678256 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-15 06:06:20.678263 | orchestrator | Sunday 15 February 2026 06:06:16 +0000 (0:00:00.783) 0:12:54.653 *******
2026-02-15 06:06:20.678270 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.678276 | orchestrator |
2026-02-15 06:06:20.678283 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-15 06:06:20.678290 | orchestrator | Sunday 15 February 2026 06:06:17 +0000 (0:00:00.770) 0:12:55.424 *******
2026-02-15 06:06:20.678296 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.678303 | orchestrator |
2026-02-15 06:06:20.678310 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-15 06:06:20.678316 | orchestrator | Sunday 15 February 2026 06:06:18 +0000 (0:00:00.884) 0:12:56.309 *******
2026-02-15 06:06:20.678361 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.678371 | orchestrator |
2026-02-15 06:06:20.678378 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-15 06:06:20.678384 | orchestrator | Sunday 15 February 2026 06:06:18 +0000 (0:00:00.788) 0:12:57.097 *******
2026-02-15 06:06:20.678391 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.678397 | orchestrator |
2026-02-15 06:06:20.678404 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-15 06:06:20.678410 | orchestrator | Sunday 15 February 2026 06:06:19 +0000 (0:00:00.785) 0:12:57.882 *******
2026-02-15 06:06:20.678417 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:06:20.678424 | orchestrator |
2026-02-15 06:06:20.678436 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-15 06:07:08.385183 | orchestrator | Sunday 15 February 2026 06:06:20 +0000 (0:00:00.882) 0:12:58.765 *******
2026-02-15 06:07:08.385301 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.385319 | orchestrator |
2026-02-15 06:07:08.385333 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-15 06:07:08.385418 | orchestrator | Sunday 15 February 2026 06:06:21 +0000 (0:00:00.816) 0:12:59.582 *******
2026-02-15 06:07:08.385433 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.385444 | orchestrator |
2026-02-15 06:07:08.385456 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-15 06:07:08.385467 | orchestrator | Sunday 15 February 2026 06:06:22 +0000 (0:00:00.904) 0:13:00.487 *******
2026-02-15 06:07:08.385478 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.385490 | orchestrator |
2026-02-15 06:07:08.385501 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-15 06:07:08.385512 | orchestrator | Sunday 15 February 2026 06:06:23 +0000 (0:00:00.777) 0:13:01.265 *******
2026-02-15 06:07:08.385523 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.385534 | orchestrator |
2026-02-15 06:07:08.385545 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-15 06:07:08.385556 | orchestrator | Sunday 15 February 2026 06:06:24 +0000 (0:00:00.863) 0:13:02.129 *******
2026-02-15 06:07:08.385567 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.385578 | orchestrator |
2026-02-15 06:07:08.385589 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-15 06:07:08.385600 | orchestrator | Sunday 15 February 2026 06:06:24 +0000 (0:00:00.763) 0:13:02.893 *******
2026-02-15 06:07:08.385611 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.385644 | orchestrator |
2026-02-15 06:07:08.385657 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:07:08.385671 | orchestrator | Sunday 15 February 2026 06:06:25 +0000 (0:00:00.795) 0:13:03.689 *******
2026-02-15 06:07:08.385682 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.385693 | orchestrator |
2026-02-15 06:07:08.385706 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:07:08.385719 | orchestrator | Sunday 15 February 2026 06:06:26 +0000 (0:00:00.778) 0:13:04.467 *******
2026-02-15 06:07:08.385732 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.385745 | orchestrator |
2026-02-15 06:07:08.385757 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:07:08.385771 | orchestrator | Sunday 15 February 2026 06:06:27 +0000 (0:00:00.897) 0:13:05.364 *******
2026-02-15 06:07:08.385784 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.385796 | orchestrator |
2026-02-15 06:07:08.385809 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:07:08.385822 | orchestrator | Sunday 15 February 2026 06:06:28 +0000 (0:00:00.830) 0:13:06.194 *******
2026-02-15 06:07:08.385835 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.385848 | orchestrator |
2026-02-15 06:07:08.385860 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:07:08.385873 | orchestrator | Sunday 15 February 2026 06:06:28 +0000 (0:00:00.782) 0:13:06.977 *******
2026-02-15 06:07:08.385886 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-15 06:07:08.385899 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-15 06:07:08.385913 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-15 06:07:08.385925 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.385938 | orchestrator |
2026-02-15 06:07:08.385950 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:07:08.385963 | orchestrator | Sunday 15 February 2026 06:06:29 +0000 (0:00:01.117) 0:13:08.095 *******
2026-02-15 06:07:08.385976 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-15 06:07:08.385988 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-15 06:07:08.386002 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-15 06:07:08.386015 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.386086 | orchestrator |
2026-02-15 06:07:08.386100 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:07:08.386111 | orchestrator | Sunday 15 February 2026 06:06:31 +0000 (0:00:01.087) 0:13:09.182 *******
2026-02-15 06:07:08.386123 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-15 06:07:08.386133 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-15 06:07:08.386145 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-15 06:07:08.386155 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.386166 | orchestrator |
2026-02-15 06:07:08.386177 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:07:08.386188 | orchestrator | Sunday 15 February 2026 06:06:32 +0000 (0:00:01.050) 0:13:10.233 *******
2026-02-15 06:07:08.386199 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.386210 | orchestrator |
2026-02-15 06:07:08.386221 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:07:08.386232 | orchestrator | Sunday 15 February 2026 06:06:32 +0000 (0:00:00.778) 0:13:11.012 *******
2026-02-15 06:07:08.386244 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-15 06:07:08.386255 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.386266 | orchestrator |
2026-02-15 06:07:08.386276 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-15 06:07:08.386287 | orchestrator | Sunday 15 February 2026 06:06:33 +0000 (0:00:00.925) 0:13:11.937 *******
2026-02-15 06:07:08.386307 | orchestrator | changed: [testbed-node-1]
2026-02-15 06:07:08.386318 | orchestrator |
2026-02-15 06:07:08.386328 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-15 06:07:08.386340 | orchestrator | Sunday 15 February 2026 06:06:35 +0000 (0:00:01.498) 0:13:13.436 *******
2026-02-15 06:07:08.386369 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:07:08.386381 | orchestrator |
2026-02-15 06:07:08.386392 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-15 06:07:08.386420 | orchestrator | Sunday 15 February 2026 06:06:36 +0000 (0:00:00.928) 0:13:14.365 *******
2026-02-15 06:07:08.386432 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1
2026-02-15 06:07:08.386444 | orchestrator |
2026-02-15 06:07:08.386461 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-15 06:07:08.386472 | orchestrator | Sunday 15 February 2026 06:06:37 +0000 (0:00:01.244) 0:13:15.609 *******
2026-02-15 06:07:08.386483 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)]
2026-02-15 06:07:08.386494 | orchestrator |
2026-02-15 06:07:08.386505 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-15 06:07:08.386515 | orchestrator | Sunday 15 February 2026 06:06:40 +0000 (0:00:03.240) 0:13:18.850 *******
2026-02-15 06:07:08.386526 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:07:08.386537 | orchestrator |
2026-02-15 06:07:08.386547 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-15 06:07:08.386558 | orchestrator | Sunday 15 February 2026 06:06:41 +0000 (0:00:01.189) 0:13:20.039 *******
2026-02-15 06:07:08.386569 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:07:08.386580 | orchestrator |
2026-02-15 06:07:08.386590 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-15 06:07:08.386601 | orchestrator | Sunday 15 February 2026 06:06:43 +0000 (0:00:01.154) 0:13:21.194 *******
2026-02-15 06:07:08.386612 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:07:08.386622 | orchestrator |
2026-02-15 06:07:08.386633 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-15 06:07:08.386644 | orchestrator | Sunday 15 February 2026 06:06:44 +0000 (0:00:01.175) 0:13:22.370 *******
2026-02-15 06:07:08.386655 | orchestrator | changed: [testbed-node-1]
2026-02-15 06:07:08.386665 | orchestrator |
2026-02-15 06:07:08.386676 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-15 06:07:08.386686 | orchestrator | Sunday 15 February 2026 06:06:46 +0000 (0:00:01.989) 0:13:24.359 *******
2026-02-15 06:07:08.386697 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:07:08.386708 | orchestrator |
2026-02-15 06:07:08.386718 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-15 06:07:08.386729 | orchestrator | Sunday 15 February 2026 06:06:47 +0000 (0:00:01.671) 0:13:26.031 *******
2026-02-15 06:07:08.386740 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:07:08.386751 | orchestrator |
2026-02-15 06:07:08.386768 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-15 06:07:08.386788 | orchestrator | Sunday 15 February 2026 06:06:49 +0000 (0:00:01.548) 0:13:27.580 *******
2026-02-15 06:07:08.386808 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:07:08.386838 | orchestrator |
2026-02-15 06:07:08.386859 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-15 06:07:08.386880 | orchestrator | Sunday 15 February 2026 06:06:51 +0000 (0:00:01.585) 0:13:29.165 *******
2026-02-15 06:07:08.386900 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:07:08.386919 | orchestrator |
2026-02-15 06:07:08.386939 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-15 06:07:08.386960 | orchestrator | Sunday 15 February 2026 06:06:52 +0000 (0:00:01.655) 0:13:30.821 *******
2026-02-15 06:07:08.386980 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:07:08.387000 | orchestrator |
2026-02-15 06:07:08.387012 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-15 06:07:08.387034 | orchestrator | Sunday 15 February 2026 06:06:54 +0000 (0:00:01.623) 0:13:32.444 *******
2026-02-15 06:07:08.387044 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-15 06:07:08.387055 | orchestrator | ok: [testbed-node-1] => (item=None)
2026-02-15 06:07:08.387067 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-15 06:07:08.387077 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2026-02-15 06:07:08.387088 | orchestrator |
2026-02-15 06:07:08.387099 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-15 06:07:08.387110 | orchestrator | Sunday 15 February 2026 06:06:58 +0000
(0:00:04.307) 0:13:36.752 ******* 2026-02-15 06:07:08.387120 | orchestrator | changed: [testbed-node-1] 2026-02-15 06:07:08.387131 | orchestrator | 2026-02-15 06:07:08.387142 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-02-15 06:07:08.387152 | orchestrator | Sunday 15 February 2026 06:07:00 +0000 (0:00:02.063) 0:13:38.815 ******* 2026-02-15 06:07:08.387163 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:07:08.387174 | orchestrator | 2026-02-15 06:07:08.387185 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-02-15 06:07:08.387195 | orchestrator | Sunday 15 February 2026 06:07:01 +0000 (0:00:01.226) 0:13:40.042 ******* 2026-02-15 06:07:08.387206 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:07:08.387217 | orchestrator | 2026-02-15 06:07:08.387228 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-02-15 06:07:08.387239 | orchestrator | Sunday 15 February 2026 06:07:03 +0000 (0:00:01.147) 0:13:41.189 ******* 2026-02-15 06:07:08.387249 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:07:08.387260 | orchestrator | 2026-02-15 06:07:08.387271 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-02-15 06:07:08.387282 | orchestrator | Sunday 15 February 2026 06:07:04 +0000 (0:00:01.777) 0:13:42.966 ******* 2026-02-15 06:07:08.387292 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:07:08.387303 | orchestrator | 2026-02-15 06:07:08.387314 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-02-15 06:07:08.387324 | orchestrator | Sunday 15 February 2026 06:07:06 +0000 (0:00:01.522) 0:13:44.489 ******* 2026-02-15 06:07:08.387335 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:07:08.387346 | orchestrator | 2026-02-15 06:07:08.387403 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] 
************************************ 2026-02-15 06:07:08.387417 | orchestrator | Sunday 15 February 2026 06:07:07 +0000 (0:00:00.839) 0:13:45.328 ******* 2026-02-15 06:07:08.387428 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-1 2026-02-15 06:07:08.387439 | orchestrator | 2026-02-15 06:07:08.387461 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-15 06:08:14.757270 | orchestrator | Sunday 15 February 2026 06:07:08 +0000 (0:00:01.145) 0:13:46.473 ******* 2026-02-15 06:08:14.757468 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.757489 | orchestrator | 2026-02-15 06:08:14.757518 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-15 06:08:14.757530 | orchestrator | Sunday 15 February 2026 06:07:09 +0000 (0:00:01.144) 0:13:47.618 ******* 2026-02-15 06:08:14.757541 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.757552 | orchestrator | 2026-02-15 06:08:14.757564 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-15 06:08:14.757574 | orchestrator | Sunday 15 February 2026 06:07:10 +0000 (0:00:01.145) 0:13:48.764 ******* 2026-02-15 06:08:14.757585 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-1 2026-02-15 06:08:14.757596 | orchestrator | 2026-02-15 06:08:14.757607 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-15 06:08:14.757618 | orchestrator | Sunday 15 February 2026 06:07:11 +0000 (0:00:01.127) 0:13:49.891 ******* 2026-02-15 06:08:14.757628 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:08:14.757640 | orchestrator | 2026-02-15 06:08:14.757674 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-15 06:08:14.757686 | orchestrator | Sunday 15 February 2026 06:07:14 +0000 
(0:00:02.556) 0:13:52.448 ******* 2026-02-15 06:08:14.757696 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:08:14.757707 | orchestrator | 2026-02-15 06:08:14.757718 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-15 06:08:14.757729 | orchestrator | Sunday 15 February 2026 06:07:16 +0000 (0:00:02.002) 0:13:54.451 ******* 2026-02-15 06:08:14.757739 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:08:14.757750 | orchestrator | 2026-02-15 06:08:14.757761 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-15 06:08:14.757772 | orchestrator | Sunday 15 February 2026 06:07:18 +0000 (0:00:02.589) 0:13:57.041 ******* 2026-02-15 06:08:14.757783 | orchestrator | changed: [testbed-node-1] 2026-02-15 06:08:14.757794 | orchestrator | 2026-02-15 06:08:14.757807 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-15 06:08:14.757819 | orchestrator | Sunday 15 February 2026 06:07:22 +0000 (0:00:03.105) 0:14:00.147 ******* 2026-02-15 06:08:14.757832 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-1 2026-02-15 06:08:14.757846 | orchestrator | 2026-02-15 06:08:14.757859 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-15 06:08:14.757871 | orchestrator | Sunday 15 February 2026 06:07:23 +0000 (0:00:01.174) 0:14:01.321 ******* 2026-02-15 06:08:14.757884 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-15 06:08:14.757897 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:08:14.757909 | orchestrator | 2026-02-15 06:08:14.757923 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-15 06:08:14.757935 | orchestrator | Sunday 15 February 2026 06:07:46 +0000 (0:00:22.951) 0:14:24.272 ******* 2026-02-15 06:08:14.757947 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:08:14.757960 | orchestrator | 2026-02-15 06:08:14.757978 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-15 06:08:14.757996 | orchestrator | Sunday 15 February 2026 06:07:48 +0000 (0:00:02.651) 0:14:26.924 ******* 2026-02-15 06:08:14.758079 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.758103 | orchestrator | 2026-02-15 06:08:14.758123 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-15 06:08:14.758136 | orchestrator | Sunday 15 February 2026 06:07:49 +0000 (0:00:00.812) 0:14:27.737 ******* 2026-02-15 06:08:14.758151 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-15 06:08:14.758168 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-15 06:08:14.758179 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-15 06:08:14.758190 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-15 06:08:14.758243 | orchestrator | ok: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-15 06:08:14.758257 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}])  2026-02-15 06:08:14.758270 | orchestrator | 2026-02-15 06:08:14.758281 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-15 06:08:14.758292 | orchestrator | Sunday 15 February 2026 06:07:58 +0000 (0:00:09.268) 0:14:37.005 ******* 2026-02-15 06:08:14.758303 | orchestrator | changed: [testbed-node-1] 2026-02-15 06:08:14.758314 | orchestrator | 
2026-02-15 06:08:14.758324 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:08:14.758335 | orchestrator | Sunday 15 February 2026 06:08:01 +0000 (0:00:02.152) 0:14:39.158 ******* 2026-02-15 06:08:14.758345 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:08:14.758356 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-15 06:08:14.758372 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-15 06:08:14.758417 | orchestrator | 2026-02-15 06:08:14.758435 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:08:14.758452 | orchestrator | Sunday 15 February 2026 06:08:02 +0000 (0:00:01.852) 0:14:41.010 ******* 2026-02-15 06:08:14.758472 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-02-15 06:08:14.758492 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-02-15 06:08:14.758508 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-02-15 06:08:14.758519 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.758530 | orchestrator | 2026-02-15 06:08:14.758540 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-15 06:08:14.758551 | orchestrator | Sunday 15 February 2026 06:08:03 +0000 (0:00:01.049) 0:14:42.060 ******* 2026-02-15 06:08:14.758562 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.758572 | orchestrator | 2026-02-15 06:08:14.758583 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-15 06:08:14.758594 | orchestrator | Sunday 15 February 2026 06:08:04 +0000 (0:00:00.808) 0:14:42.868 ******* 2026-02-15 06:08:14.758604 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:08:14.758615 | orchestrator | 2026-02-15 06:08:14.758625 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-15 06:08:14.758636 | orchestrator | Sunday 15 February 2026 06:08:07 +0000 (0:00:02.389) 0:14:45.258 ******* 2026-02-15 06:08:14.758647 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.758657 | orchestrator | 2026-02-15 06:08:14.758668 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-15 06:08:14.758678 | orchestrator | Sunday 15 February 2026 06:08:07 +0000 (0:00:00.795) 0:14:46.053 ******* 2026-02-15 06:08:14.758689 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.758700 | orchestrator | 2026-02-15 06:08:14.758710 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-15 06:08:14.758721 | orchestrator | Sunday 15 February 2026 06:08:08 +0000 (0:00:00.770) 0:14:46.823 ******* 2026-02-15 06:08:14.758731 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.758752 | orchestrator | 2026-02-15 06:08:14.758763 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-15 06:08:14.758774 | orchestrator | Sunday 15 February 2026 06:08:09 +0000 (0:00:00.775) 0:14:47.599 ******* 2026-02-15 06:08:14.758784 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.758795 | orchestrator | 2026-02-15 06:08:14.758805 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-15 06:08:14.758816 | orchestrator | Sunday 15 February 2026 06:08:10 +0000 (0:00:00.780) 0:14:48.379 ******* 2026-02-15 06:08:14.758826 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.758841 | 
orchestrator | 2026-02-15 06:08:14.758859 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-15 06:08:14.758875 | orchestrator | Sunday 15 February 2026 06:08:11 +0000 (0:00:00.778) 0:14:49.158 ******* 2026-02-15 06:08:14.758893 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.758912 | orchestrator | 2026-02-15 06:08:14.758930 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-15 06:08:14.758945 | orchestrator | Sunday 15 February 2026 06:08:11 +0000 (0:00:00.774) 0:14:49.932 ******* 2026-02-15 06:08:14.758956 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:08:14.758966 | orchestrator | 2026-02-15 06:08:14.758977 | orchestrator | PLAY [Upgrade ceph mon cluster] ************************************************ 2026-02-15 06:08:14.758987 | orchestrator | 2026-02-15 06:08:14.758998 | orchestrator | TASK [Remove ceph aliases] ***************************************************** 2026-02-15 06:08:14.759009 | orchestrator | Sunday 15 February 2026 06:08:12 +0000 (0:00:00.973) 0:14:50.906 ******* 2026-02-15 06:08:14.759019 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:14.759030 | orchestrator | 2026-02-15 06:08:14.759041 | orchestrator | TASK [Set mon_host_count] ****************************************************** 2026-02-15 06:08:14.759052 | orchestrator | Sunday 15 February 2026 06:08:13 +0000 (0:00:01.139) 0:14:52.045 ******* 2026-02-15 06:08:14.759062 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:14.759073 | orchestrator | 2026-02-15 06:08:14.759084 | orchestrator | TASK [Fail when less than three monitors] ************************************** 2026-02-15 06:08:14.759103 | orchestrator | Sunday 15 February 2026 06:08:14 +0000 (0:00:00.800) 0:14:52.846 ******* 2026-02-15 06:08:39.799733 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:39.799812 | orchestrator | 2026-02-15 06:08:39.799829 | orchestrator 
| TASK [Select a running monitor] ************************************************ 2026-02-15 06:08:39.799834 | orchestrator | Sunday 15 February 2026 06:08:15 +0000 (0:00:00.785) 0:14:53.631 ******* 2026-02-15 06:08:39.799838 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:39.799843 | orchestrator | 2026-02-15 06:08:39.799847 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 06:08:39.799851 | orchestrator | Sunday 15 February 2026 06:08:16 +0000 (0:00:00.793) 0:14:54.425 ******* 2026-02-15 06:08:39.799855 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-2 2026-02-15 06:08:39.799859 | orchestrator | 2026-02-15 06:08:39.799862 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 06:08:39.799866 | orchestrator | Sunday 15 February 2026 06:08:17 +0000 (0:00:01.145) 0:14:55.571 ******* 2026-02-15 06:08:39.799870 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:39.799874 | orchestrator | 2026-02-15 06:08:39.799878 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 06:08:39.799881 | orchestrator | Sunday 15 February 2026 06:08:19 +0000 (0:00:01.541) 0:14:57.113 ******* 2026-02-15 06:08:39.799885 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:39.799889 | orchestrator | 2026-02-15 06:08:39.799892 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 06:08:39.799896 | orchestrator | Sunday 15 February 2026 06:08:20 +0000 (0:00:01.192) 0:14:58.305 ******* 2026-02-15 06:08:39.799900 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:39.799904 | orchestrator | 2026-02-15 06:08:39.799907 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 06:08:39.799926 | orchestrator | Sunday 15 February 2026 06:08:21 +0000 (0:00:01.505) 0:14:59.810 
******* 2026-02-15 06:08:39.799930 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:39.799934 | orchestrator | 2026-02-15 06:08:39.799937 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 06:08:39.799941 | orchestrator | Sunday 15 February 2026 06:08:22 +0000 (0:00:01.191) 0:15:01.001 ******* 2026-02-15 06:08:39.799945 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:39.799948 | orchestrator | 2026-02-15 06:08:39.799952 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 06:08:39.799956 | orchestrator | Sunday 15 February 2026 06:08:24 +0000 (0:00:01.207) 0:15:02.209 ******* 2026-02-15 06:08:39.799960 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:39.799964 | orchestrator | 2026-02-15 06:08:39.799968 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 06:08:39.799972 | orchestrator | Sunday 15 February 2026 06:08:25 +0000 (0:00:01.174) 0:15:03.384 ******* 2026-02-15 06:08:39.799976 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:39.799979 | orchestrator | 2026-02-15 06:08:39.799983 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 06:08:39.799987 | orchestrator | Sunday 15 February 2026 06:08:26 +0000 (0:00:01.194) 0:15:04.579 ******* 2026-02-15 06:08:39.799991 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:39.799994 | orchestrator | 2026-02-15 06:08:39.799998 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 06:08:39.800002 | orchestrator | Sunday 15 February 2026 06:08:27 +0000 (0:00:01.135) 0:15:05.715 ******* 2026-02-15 06:08:39.800006 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:08:39.800009 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-02-15 06:08:39.800013 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:08:39.800017 | orchestrator | 2026-02-15 06:08:39.800021 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 06:08:39.800025 | orchestrator | Sunday 15 February 2026 06:08:29 +0000 (0:00:02.069) 0:15:07.784 ******* 2026-02-15 06:08:39.800028 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:39.800032 | orchestrator | 2026-02-15 06:08:39.800036 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 06:08:39.800040 | orchestrator | Sunday 15 February 2026 06:08:30 +0000 (0:00:01.240) 0:15:09.025 ******* 2026-02-15 06:08:39.800043 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:08:39.800047 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:08:39.800051 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:08:39.800054 | orchestrator | 2026-02-15 06:08:39.800058 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 06:08:39.800062 | orchestrator | Sunday 15 February 2026 06:08:34 +0000 (0:00:03.257) 0:15:12.282 ******* 2026-02-15 06:08:39.800066 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-15 06:08:39.800069 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-15 06:08:39.800073 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-15 06:08:39.800077 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:39.800081 | orchestrator | 2026-02-15 06:08:39.800084 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 06:08:39.800088 | orchestrator | Sunday 15 February 2026 06:08:35 +0000 (0:00:01.486) 
0:15:13.769 ******* 2026-02-15 06:08:39.800094 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 06:08:39.800100 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 06:08:39.800122 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 06:08:39.800126 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:39.800130 | orchestrator | 2026-02-15 06:08:39.800134 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 06:08:39.800138 | orchestrator | Sunday 15 February 2026 06:08:37 +0000 (0:00:01.662) 0:15:15.431 ******* 2026-02-15 06:08:39.800143 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:08:39.800148 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:08:39.800153 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:08:39.800156 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:39.800160 | orchestrator | 2026-02-15 06:08:39.800164 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 06:08:39.800168 | orchestrator | Sunday 15 February 2026 06:08:38 +0000 (0:00:01.250) 0:15:16.682 ******* 2026-02-15 06:08:39.800173 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:08:31.793083', 'end': '2026-02-15 06:08:31.850896', 'delta': '0:00:00.057813', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 06:08:39.800180 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:08:32.380177', 'end': '2026-02-15 
06:08:32.427397', 'delta': '0:00:00.047220', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 06:08:39.800187 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '9cffadff9441', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:08:32.955148', 'end': '2026-02-15 06:08:33.007775', 'delta': '0:00:00.052627', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['9cffadff9441'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 06:08:58.534217 | orchestrator | 2026-02-15 06:08:58.534397 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 06:08:58.534416 | orchestrator | Sunday 15 February 2026 06:08:39 +0000 (0:00:01.207) 0:15:17.889 ******* 2026-02-15 06:08:58.534427 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:58.534438 | orchestrator | 2026-02-15 06:08:58.534448 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 06:08:58.534459 | orchestrator | Sunday 15 February 2026 06:08:41 +0000 (0:00:01.382) 0:15:19.271 ******* 2026-02-15 06:08:58.534469 | orchestrator | skipping: 
[testbed-node-2] 2026-02-15 06:08:58.534480 | orchestrator | 2026-02-15 06:08:58.534491 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-15 06:08:58.534501 | orchestrator | Sunday 15 February 2026 06:08:42 +0000 (0:00:01.246) 0:15:20.518 ******* 2026-02-15 06:08:58.534511 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:58.534521 | orchestrator | 2026-02-15 06:08:58.534531 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-15 06:08:58.534541 | orchestrator | Sunday 15 February 2026 06:08:43 +0000 (0:00:01.179) 0:15:21.697 ******* 2026-02-15 06:08:58.534551 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:08:58.534562 | orchestrator | 2026-02-15 06:08:58.534572 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:08:58.534582 | orchestrator | Sunday 15 February 2026 06:08:45 +0000 (0:00:01.998) 0:15:23.695 ******* 2026-02-15 06:08:58.534592 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:08:58.534602 | orchestrator | 2026-02-15 06:08:58.534612 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-15 06:08:58.534622 | orchestrator | Sunday 15 February 2026 06:08:46 +0000 (0:00:01.153) 0:15:24.849 ******* 2026-02-15 06:08:58.534632 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:58.534643 | orchestrator | 2026-02-15 06:08:58.534652 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-15 06:08:58.534663 | orchestrator | Sunday 15 February 2026 06:08:47 +0000 (0:00:01.124) 0:15:25.973 ******* 2026-02-15 06:08:58.534672 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:58.534682 | orchestrator | 2026-02-15 06:08:58.534692 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 
06:08:58.534702 | orchestrator | Sunday 15 February 2026 06:08:49 +0000 (0:00:01.248) 0:15:27.222 ******* 2026-02-15 06:08:58.534713 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:58.534723 | orchestrator | 2026-02-15 06:08:58.534733 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-15 06:08:58.534746 | orchestrator | Sunday 15 February 2026 06:08:50 +0000 (0:00:01.118) 0:15:28.341 ******* 2026-02-15 06:08:58.534757 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:58.534768 | orchestrator | 2026-02-15 06:08:58.534779 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-15 06:08:58.534791 | orchestrator | Sunday 15 February 2026 06:08:51 +0000 (0:00:01.210) 0:15:29.551 ******* 2026-02-15 06:08:58.534803 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:58.534814 | orchestrator | 2026-02-15 06:08:58.534825 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-15 06:08:58.534837 | orchestrator | Sunday 15 February 2026 06:08:52 +0000 (0:00:01.128) 0:15:30.680 ******* 2026-02-15 06:08:58.534871 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:58.534883 | orchestrator | 2026-02-15 06:08:58.534895 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-15 06:08:58.534906 | orchestrator | Sunday 15 February 2026 06:08:53 +0000 (0:00:01.141) 0:15:31.821 ******* 2026-02-15 06:08:58.534918 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:58.534930 | orchestrator | 2026-02-15 06:08:58.534941 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-15 06:08:58.534954 | orchestrator | Sunday 15 February 2026 06:08:54 +0000 (0:00:01.137) 0:15:32.959 ******* 2026-02-15 06:08:58.534965 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:58.534976 | 
orchestrator | 2026-02-15 06:08:58.534988 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-15 06:08:58.535000 | orchestrator | Sunday 15 February 2026 06:08:56 +0000 (0:00:01.198) 0:15:34.158 ******* 2026-02-15 06:08:58.535012 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:58.535023 | orchestrator | 2026-02-15 06:08:58.535035 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-15 06:08:58.535046 | orchestrator | Sunday 15 February 2026 06:08:57 +0000 (0:00:01.129) 0:15:35.288 ******* 2026-02-15 06:08:58.535060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:08:58.535074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:08:58.535111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  
2026-02-15 06:08:58.535124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 06:08:58.535137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:08:58.535147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:08:58.535165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 
06:08:58.535193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1976e1cf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 
'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 06:08:59.893857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:08:59.893962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:08:59.893978 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:08:59.893991 | orchestrator | 2026-02-15 06:08:59.894003 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 06:08:59.894015 | orchestrator | Sunday 15 February 2026 06:08:58 +0000 (0:00:01.320) 0:15:36.608 ******* 2026-02-15 06:08:59.894100 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 
'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:08:59.894140 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:08:59.894152 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:08:59.894165 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:08:59.894209 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:08:59.894222 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:08:59.894233 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': 
'0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:08:59.894255 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1976e1cf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:08:59.894326 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:09:35.946325 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:09:35.946445 | orchestrator | skipping: [testbed-node-2] 2026-02-15 
06:09:35.946486 | orchestrator | 2026-02-15 06:09:35.946499 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-15 06:09:35.946512 | orchestrator | Sunday 15 February 2026 06:08:59 +0000 (0:00:01.381) 0:15:37.990 ******* 2026-02-15 06:09:35.946523 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:09:35.946535 | orchestrator | 2026-02-15 06:09:35.946546 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-15 06:09:35.946557 | orchestrator | Sunday 15 February 2026 06:09:01 +0000 (0:00:01.530) 0:15:39.521 ******* 2026-02-15 06:09:35.946567 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:09:35.946578 | orchestrator | 2026-02-15 06:09:35.946589 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:09:35.946600 | orchestrator | Sunday 15 February 2026 06:09:02 +0000 (0:00:01.173) 0:15:40.694 ******* 2026-02-15 06:09:35.946611 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:09:35.946621 | orchestrator | 2026-02-15 06:09:35.946632 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:09:35.946643 | orchestrator | Sunday 15 February 2026 06:09:04 +0000 (0:00:01.534) 0:15:42.229 ******* 2026-02-15 06:09:35.946654 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.946664 | orchestrator | 2026-02-15 06:09:35.946675 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:09:35.946685 | orchestrator | Sunday 15 February 2026 06:09:05 +0000 (0:00:01.187) 0:15:43.416 ******* 2026-02-15 06:09:35.946696 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.946707 | orchestrator | 2026-02-15 06:09:35.946717 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:09:35.946728 | orchestrator | Sunday 15 February 2026 
06:09:06 +0000 (0:00:01.265) 0:15:44.682 ******* 2026-02-15 06:09:35.946739 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.946749 | orchestrator | 2026-02-15 06:09:35.946760 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:09:35.946771 | orchestrator | Sunday 15 February 2026 06:09:07 +0000 (0:00:01.177) 0:15:45.859 ******* 2026-02-15 06:09:35.946782 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-15 06:09:35.946793 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-15 06:09:35.946804 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:09:35.946815 | orchestrator | 2026-02-15 06:09:35.946825 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:09:35.946837 | orchestrator | Sunday 15 February 2026 06:09:09 +0000 (0:00:01.761) 0:15:47.621 ******* 2026-02-15 06:09:35.946847 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-15 06:09:35.946858 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-15 06:09:35.946869 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-15 06:09:35.946880 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.946891 | orchestrator | 2026-02-15 06:09:35.946902 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-15 06:09:35.946912 | orchestrator | Sunday 15 February 2026 06:09:10 +0000 (0:00:01.199) 0:15:48.821 ******* 2026-02-15 06:09:35.946923 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.946934 | orchestrator | 2026-02-15 06:09:35.946945 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-15 06:09:35.946955 | orchestrator | Sunday 15 February 2026 06:09:11 +0000 (0:00:01.162) 0:15:49.984 ******* 2026-02-15 06:09:35.946966 | 
orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:09:35.946977 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:09:35.946988 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:09:35.946999 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:09:35.947009 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:09:35.947028 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:09:35.947038 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:09:35.947049 | orchestrator | 2026-02-15 06:09:35.947060 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-15 06:09:35.947071 | orchestrator | Sunday 15 February 2026 06:09:13 +0000 (0:00:01.884) 0:15:51.868 ******* 2026-02-15 06:09:35.947118 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:09:35.947131 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:09:35.947142 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:09:35.947153 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:09:35.947181 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:09:35.947193 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:09:35.947203 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:09:35.947214 | orchestrator | 2026-02-15 06:09:35.947225 | 
orchestrator | TASK [Get ceph cluster status] ************************************************* 2026-02-15 06:09:35.947235 | orchestrator | Sunday 15 February 2026 06:09:16 +0000 (0:00:02.269) 0:15:54.138 ******* 2026-02-15 06:09:35.947246 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.947257 | orchestrator | 2026-02-15 06:09:35.947276 | orchestrator | TASK [Display ceph health detail] ********************************************** 2026-02-15 06:09:35.947294 | orchestrator | Sunday 15 February 2026 06:09:16 +0000 (0:00:00.941) 0:15:55.080 ******* 2026-02-15 06:09:35.947313 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.947333 | orchestrator | 2026-02-15 06:09:35.947353 | orchestrator | TASK [Fail if cluster isn't in an acceptable state] **************************** 2026-02-15 06:09:35.947370 | orchestrator | Sunday 15 February 2026 06:09:17 +0000 (0:00:00.902) 0:15:55.982 ******* 2026-02-15 06:09:35.947389 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.947408 | orchestrator | 2026-02-15 06:09:35.947427 | orchestrator | TASK [Get the ceph quorum status] ********************************************** 2026-02-15 06:09:35.947447 | orchestrator | Sunday 15 February 2026 06:09:18 +0000 (0:00:00.876) 0:15:56.859 ******* 2026-02-15 06:09:35.947469 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.947489 | orchestrator | 2026-02-15 06:09:35.947510 | orchestrator | TASK [Fail if the cluster quorum isn't in an acceptable state] ***************** 2026-02-15 06:09:35.947531 | orchestrator | Sunday 15 February 2026 06:09:19 +0000 (0:00:00.926) 0:15:57.786 ******* 2026-02-15 06:09:35.947551 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.947570 | orchestrator | 2026-02-15 06:09:35.947591 | orchestrator | TASK [Ensure /var/lib/ceph/bootstrap-rbd-mirror is present] ******************** 2026-02-15 06:09:35.947613 | orchestrator | Sunday 15 February 2026 06:09:20 +0000 (0:00:00.843) 0:15:58.630 ******* 
2026-02-15 06:09:35.947634 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-15 06:09:35.947656 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-15 06:09:35.947677 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-15 06:09:35.947698 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.947718 | orchestrator | 2026-02-15 06:09:35.947739 | orchestrator | TASK [Create potentially missing keys (rbd and rbd-mirror)] ******************** 2026-02-15 06:09:35.947759 | orchestrator | Sunday 15 February 2026 06:09:21 +0000 (0:00:01.406) 0:16:00.036 ******* 2026-02-15 06:09:35.947779 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-0'])  2026-02-15 06:09:35.947798 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-1'])  2026-02-15 06:09:35.947817 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd', 'testbed-node-2'])  2026-02-15 06:09:35.947849 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-0'])  2026-02-15 06:09:35.947869 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-1'])  2026-02-15 06:09:35.947890 | orchestrator | skipping: [testbed-node-2] => (item=['bootstrap-rbd-mirror', 'testbed-node-2'])  2026-02-15 06:09:35.947909 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.947929 | orchestrator | 2026-02-15 06:09:35.947949 | orchestrator | TASK [Stop ceph mon] *********************************************************** 2026-02-15 06:09:35.947968 | orchestrator | Sunday 15 February 2026 06:09:23 +0000 (0:00:01.687) 0:16:01.724 ******* 2026-02-15 06:09:35.947988 | orchestrator | changed: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:09:35.948007 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:09:35.948028 | orchestrator | 2026-02-15 06:09:35.948048 | 
orchestrator | TASK [Mask the mgr service] **************************************************** 2026-02-15 06:09:35.948068 | orchestrator | Sunday 15 February 2026 06:09:27 +0000 (0:00:03.917) 0:16:05.642 ******* 2026-02-15 06:09:35.948112 | orchestrator | changed: [testbed-node-2] 2026-02-15 06:09:35.948132 | orchestrator | 2026-02-15 06:09:35.948151 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-15 06:09:35.948169 | orchestrator | Sunday 15 February 2026 06:09:29 +0000 (0:00:02.161) 0:16:07.804 ******* 2026-02-15 06:09:35.948189 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-15 06:09:35.948208 | orchestrator | 2026-02-15 06:09:35.948243 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-15 06:09:35.948262 | orchestrator | Sunday 15 February 2026 06:09:30 +0000 (0:00:01.188) 0:16:08.992 ******* 2026-02-15 06:09:35.948280 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-15 06:09:35.948300 | orchestrator | 2026-02-15 06:09:35.948318 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-15 06:09:35.948336 | orchestrator | Sunday 15 February 2026 06:09:32 +0000 (0:00:01.129) 0:16:10.122 ******* 2026-02-15 06:09:35.948354 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:09:35.948372 | orchestrator | 2026-02-15 06:09:35.948392 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-15 06:09:35.948410 | orchestrator | Sunday 15 February 2026 06:09:33 +0000 (0:00:01.542) 0:16:11.665 ******* 2026-02-15 06:09:35.948430 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.948449 | orchestrator | 2026-02-15 06:09:35.948478 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-02-15 06:09:35.948499 | orchestrator | Sunday 15 February 2026 06:09:34 +0000 (0:00:01.132) 0:16:12.798 ******* 2026-02-15 06:09:35.948520 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:09:35.948539 | orchestrator | 2026-02-15 06:09:35.948550 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-15 06:09:35.948571 | orchestrator | Sunday 15 February 2026 06:09:35 +0000 (0:00:01.235) 0:16:14.033 ******* 2026-02-15 06:10:18.332420 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:10:18.332555 | orchestrator | 2026-02-15 06:10:18.332574 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-15 06:10:18.332587 | orchestrator | Sunday 15 February 2026 06:09:37 +0000 (0:00:01.155) 0:16:15.189 ******* 2026-02-15 06:10:18.332598 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:10:18.332610 | orchestrator | 2026-02-15 06:10:18.332622 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-15 06:10:18.332633 | orchestrator | Sunday 15 February 2026 06:09:38 +0000 (0:00:01.556) 0:16:16.745 ******* 2026-02-15 06:10:18.332644 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:10:18.332654 | orchestrator | 2026-02-15 06:10:18.332665 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-15 06:10:18.332676 | orchestrator | Sunday 15 February 2026 06:09:39 +0000 (0:00:01.125) 0:16:17.871 ******* 2026-02-15 06:10:18.332709 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:10:18.332721 | orchestrator | 2026-02-15 06:10:18.332732 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-15 06:10:18.332743 | orchestrator | Sunday 15 February 2026 06:09:40 +0000 (0:00:01.146) 0:16:19.017 ******* 2026-02-15 06:10:18.332754 | orchestrator | ok: [testbed-node-2] 
2026-02-15 06:10:18.332764 | orchestrator |
2026-02-15 06:10:18.332775 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 06:10:18.332786 | orchestrator | Sunday 15 February 2026 06:09:42 +0000 (0:00:01.922) 0:16:20.939 *******
2026-02-15 06:10:18.332796 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:18.332807 | orchestrator |
2026-02-15 06:10:18.332817 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 06:10:18.332828 | orchestrator | Sunday 15 February 2026 06:09:44 +0000 (0:00:01.565) 0:16:22.504 *******
2026-02-15 06:10:18.332839 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.332849 | orchestrator |
2026-02-15 06:10:18.332860 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 06:10:18.332871 | orchestrator | Sunday 15 February 2026 06:09:45 +0000 (0:00:00.753) 0:16:23.258 *******
2026-02-15 06:10:18.332882 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:18.332921 | orchestrator |
2026-02-15 06:10:18.332941 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 06:10:18.332959 | orchestrator | Sunday 15 February 2026 06:09:45 +0000 (0:00:00.829) 0:16:24.087 *******
2026-02-15 06:10:18.332972 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.332985 | orchestrator |
2026-02-15 06:10:18.332997 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 06:10:18.333010 | orchestrator | Sunday 15 February 2026 06:09:46 +0000 (0:00:00.797) 0:16:24.885 *******
2026-02-15 06:10:18.333022 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333034 | orchestrator |
2026-02-15 06:10:18.333047 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 06:10:18.333060 | orchestrator | Sunday 15 February 2026 06:09:47 +0000 (0:00:00.781) 0:16:25.666 *******
2026-02-15 06:10:18.333073 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333085 | orchestrator |
2026-02-15 06:10:18.333097 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 06:10:18.333109 | orchestrator | Sunday 15 February 2026 06:09:48 +0000 (0:00:00.818) 0:16:26.485 *******
2026-02-15 06:10:18.333121 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333133 | orchestrator |
2026-02-15 06:10:18.333145 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 06:10:18.333157 | orchestrator | Sunday 15 February 2026 06:09:49 +0000 (0:00:00.827) 0:16:27.313 *******
2026-02-15 06:10:18.333169 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333182 | orchestrator |
2026-02-15 06:10:18.333194 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 06:10:18.333206 | orchestrator | Sunday 15 February 2026 06:09:49 +0000 (0:00:00.778) 0:16:28.092 *******
2026-02-15 06:10:18.333218 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:18.333230 | orchestrator |
2026-02-15 06:10:18.333241 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 06:10:18.333254 | orchestrator | Sunday 15 February 2026 06:09:50 +0000 (0:00:00.857) 0:16:28.949 *******
2026-02-15 06:10:18.333266 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:18.333279 | orchestrator |
2026-02-15 06:10:18.333290 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 06:10:18.333301 | orchestrator | Sunday 15 February 2026 06:09:51 +0000 (0:00:00.803) 0:16:29.753 *******
2026-02-15 06:10:18.333312 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:18.333322 | orchestrator |
2026-02-15 06:10:18.333333 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-15 06:10:18.333343 | orchestrator | Sunday 15 February 2026 06:09:52 +0000 (0:00:00.817) 0:16:30.570 *******
2026-02-15 06:10:18.333363 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333374 | orchestrator |
2026-02-15 06:10:18.333385 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-15 06:10:18.333395 | orchestrator | Sunday 15 February 2026 06:09:53 +0000 (0:00:00.788) 0:16:31.359 *******
2026-02-15 06:10:18.333406 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333417 | orchestrator |
2026-02-15 06:10:18.333428 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-15 06:10:18.333439 | orchestrator | Sunday 15 February 2026 06:09:54 +0000 (0:00:00.781) 0:16:32.140 *******
2026-02-15 06:10:18.333450 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333460 | orchestrator |
2026-02-15 06:10:18.333471 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-15 06:10:18.333496 | orchestrator | Sunday 15 February 2026 06:09:54 +0000 (0:00:00.806) 0:16:32.946 *******
2026-02-15 06:10:18.333507 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333518 | orchestrator |
2026-02-15 06:10:18.333529 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-15 06:10:18.333539 | orchestrator | Sunday 15 February 2026 06:09:55 +0000 (0:00:00.827) 0:16:33.774 *******
2026-02-15 06:10:18.333550 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333561 | orchestrator |
2026-02-15 06:10:18.333590 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-15 06:10:18.333602 | orchestrator | Sunday 15 February 2026 06:09:56 +0000 (0:00:00.763) 0:16:34.538 *******
2026-02-15 06:10:18.333617 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333635 | orchestrator |
2026-02-15 06:10:18.333653 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-15 06:10:18.333671 | orchestrator | Sunday 15 February 2026 06:09:57 +0000 (0:00:00.784) 0:16:35.322 *******
2026-02-15 06:10:18.333688 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333706 | orchestrator |
2026-02-15 06:10:18.333725 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-15 06:10:18.333745 | orchestrator | Sunday 15 February 2026 06:09:58 +0000 (0:00:00.796) 0:16:36.119 *******
2026-02-15 06:10:18.333766 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333784 | orchestrator |
2026-02-15 06:10:18.333797 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-15 06:10:18.333808 | orchestrator | Sunday 15 February 2026 06:09:58 +0000 (0:00:00.752) 0:16:36.871 *******
2026-02-15 06:10:18.333818 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333829 | orchestrator |
2026-02-15 06:10:18.333839 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-15 06:10:18.333850 | orchestrator | Sunday 15 February 2026 06:09:59 +0000 (0:00:00.799) 0:16:37.671 *******
2026-02-15 06:10:18.333860 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333871 | orchestrator |
2026-02-15 06:10:18.333881 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-15 06:10:18.333917 | orchestrator | Sunday 15 February 2026 06:10:00 +0000 (0:00:00.775) 0:16:38.446 *******
2026-02-15 06:10:18.333938 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.333956 | orchestrator |
2026-02-15 06:10:18.333974 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-15 06:10:18.333991 | orchestrator | Sunday 15 February 2026 06:10:01 +0000 (0:00:00.775) 0:16:39.222 *******
2026-02-15 06:10:18.334009 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.334094 | orchestrator |
2026-02-15 06:10:18.334107 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-15 06:10:18.334118 | orchestrator | Sunday 15 February 2026 06:10:01 +0000 (0:00:00.805) 0:16:40.027 *******
2026-02-15 06:10:18.334128 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:18.334139 | orchestrator |
2026-02-15 06:10:18.334150 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-15 06:10:18.334161 | orchestrator | Sunday 15 February 2026 06:10:03 +0000 (0:00:02.235) 0:16:41.800 *******
2026-02-15 06:10:18.334181 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:18.334191 | orchestrator |
2026-02-15 06:10:18.334202 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-15 06:10:18.334213 | orchestrator | Sunday 15 February 2026 06:10:05 +0000 (0:00:01.156) 0:16:44.036 *******
2026-02-15 06:10:18.334223 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-15 06:10:18.334235 | orchestrator |
2026-02-15 06:10:18.334246 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-15 06:10:18.334256 | orchestrator | Sunday 15 February 2026 06:10:07 +0000 (0:00:01.147) 0:16:45.192 *******
2026-02-15 06:10:18.334267 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.334278 | orchestrator |
2026-02-15 06:10:18.334289 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-15 06:10:18.334300 | orchestrator | Sunday 15 February 2026 06:10:08 +0000 (0:00:01.147) 0:16:46.339 *******
2026-02-15 06:10:18.334310 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.334321 | orchestrator |
2026-02-15 06:10:18.334332 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-15 06:10:18.334343 | orchestrator | Sunday 15 February 2026 06:10:09 +0000 (0:00:01.126) 0:16:47.466 *******
2026-02-15 06:10:18.334353 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-15 06:10:18.334364 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-15 06:10:18.334379 | orchestrator |
2026-02-15 06:10:18.334397 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-15 06:10:18.334416 | orchestrator | Sunday 15 February 2026 06:10:11 +0000 (0:00:01.857) 0:16:49.323 *******
2026-02-15 06:10:18.334433 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:18.334451 | orchestrator |
2026-02-15 06:10:18.334470 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-15 06:10:18.334489 | orchestrator | Sunday 15 February 2026 06:10:12 +0000 (0:00:01.462) 0:16:50.785 *******
2026-02-15 06:10:18.334508 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.334526 | orchestrator |
2026-02-15 06:10:18.334539 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-15 06:10:18.334550 | orchestrator | Sunday 15 February 2026 06:10:13 +0000 (0:00:01.116) 0:16:51.902 *******
2026-02-15 06:10:18.334560 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.334571 | orchestrator |
2026-02-15 06:10:18.334582 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-15 06:10:18.334592 | orchestrator | Sunday 15 February 2026 06:10:14 +0000 (0:00:00.812) 0:16:52.714 *******
2026-02-15 06:10:18.334602 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:18.334613 | orchestrator |
2026-02-15 06:10:18.334623 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-15 06:10:18.334634 | orchestrator | Sunday 15 February 2026 06:10:15 +0000 (0:00:00.807) 0:16:53.522 *******
2026-02-15 06:10:18.334651 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-15 06:10:18.334663 | orchestrator |
2026-02-15 06:10:18.334673 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-15 06:10:18.334684 | orchestrator | Sunday 15 February 2026 06:10:16 +0000 (0:00:01.140) 0:16:54.662 *******
2026-02-15 06:10:18.334694 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:18.334705 | orchestrator |
2026-02-15 06:10:18.334716 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-15 06:10:18.334737 | orchestrator | Sunday 15 February 2026 06:10:18 +0000 (0:00:01.758) 0:16:56.421 *******
2026-02-15 06:10:57.982661 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-15 06:10:57.982846 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-15 06:10:57.982875 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-15 06:10:57.982912 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.982927 | orchestrator |
2026-02-15 06:10:57.982938 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-15 06:10:57.982949 | orchestrator | Sunday 15 February 2026 06:10:19 +0000 (0:00:01.173) 0:16:57.594 *******
2026-02-15 06:10:57.982960 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.982971 | orchestrator |
2026-02-15 06:10:57.982982 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-15 06:10:57.982992 | orchestrator | Sunday 15 February 2026 06:10:20 +0000 (0:00:01.140) 0:16:58.735 *******
2026-02-15 06:10:57.983003 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983014 | orchestrator |
2026-02-15 06:10:57.983025 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-15 06:10:57.983035 | orchestrator | Sunday 15 February 2026 06:10:21 +0000 (0:00:01.164) 0:16:59.899 *******
2026-02-15 06:10:57.983046 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983057 | orchestrator |
2026-02-15 06:10:57.983067 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-15 06:10:57.983078 | orchestrator | Sunday 15 February 2026 06:10:22 +0000 (0:00:01.156) 0:17:01.056 *******
2026-02-15 06:10:57.983088 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983099 | orchestrator |
2026-02-15 06:10:57.983110 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-15 06:10:57.983120 | orchestrator | Sunday 15 February 2026 06:10:24 +0000 (0:00:01.185) 0:17:02.241 *******
2026-02-15 06:10:57.983131 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983141 | orchestrator |
2026-02-15 06:10:57.983152 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-15 06:10:57.983163 | orchestrator | Sunday 15 February 2026 06:10:24 +0000 (0:00:00.826) 0:17:03.067 *******
2026-02-15 06:10:57.983173 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:57.983187 | orchestrator |
2026-02-15 06:10:57.983200 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-15 06:10:57.983212 | orchestrator | Sunday 15 February 2026 06:10:27 +0000 (0:00:02.216) 0:17:05.284 *******
2026-02-15 06:10:57.983225 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:57.983237 | orchestrator |
2026-02-15 06:10:57.983250 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-15 06:10:57.983262 | orchestrator | Sunday 15 February 2026 06:10:27 +0000 (0:00:00.802) 0:17:06.087 *******
2026-02-15 06:10:57.983274 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-02-15 06:10:57.983285 | orchestrator |
2026-02-15 06:10:57.983295 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-15 06:10:57.983306 | orchestrator | Sunday 15 February 2026 06:10:29 +0000 (0:00:01.104) 0:17:07.192 *******
2026-02-15 06:10:57.983316 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983327 | orchestrator |
2026-02-15 06:10:57.983338 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-15 06:10:57.983348 | orchestrator | Sunday 15 February 2026 06:10:30 +0000 (0:00:01.157) 0:17:08.350 *******
2026-02-15 06:10:57.983359 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983369 | orchestrator |
2026-02-15 06:10:57.983380 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-15 06:10:57.983391 | orchestrator | Sunday 15 February 2026 06:10:31 +0000 (0:00:01.166) 0:17:09.517 *******
2026-02-15 06:10:57.983402 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983413 | orchestrator |
2026-02-15 06:10:57.983423 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-15 06:10:57.983434 | orchestrator | Sunday 15 February 2026 06:10:32 +0000 (0:00:01.143) 0:17:10.661 *******
2026-02-15 06:10:57.983445 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983455 | orchestrator |
2026-02-15 06:10:57.983466 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-15 06:10:57.983485 | orchestrator | Sunday 15 February 2026 06:10:33 +0000 (0:00:01.261) 0:17:11.923 *******
2026-02-15 06:10:57.983496 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983506 | orchestrator |
2026-02-15 06:10:57.983517 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-15 06:10:57.983528 | orchestrator | Sunday 15 February 2026 06:10:35 +0000 (0:00:01.181) 0:17:13.104 *******
2026-02-15 06:10:57.983538 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983549 | orchestrator |
2026-02-15 06:10:57.983560 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-15 06:10:57.983570 | orchestrator | Sunday 15 February 2026 06:10:36 +0000 (0:00:01.192) 0:17:14.297 *******
2026-02-15 06:10:57.983581 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983592 | orchestrator |
2026-02-15 06:10:57.983602 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-15 06:10:57.983613 | orchestrator | Sunday 15 February 2026 06:10:37 +0000 (0:00:01.154) 0:17:15.451 *******
2026-02-15 06:10:57.983623 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.983634 | orchestrator |
2026-02-15 06:10:57.983644 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-15 06:10:57.983670 | orchestrator | Sunday 15 February 2026 06:10:38 +0000 (0:00:01.142) 0:17:16.594 *******
2026-02-15 06:10:57.983681 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:10:57.983692 | orchestrator |
2026-02-15 06:10:57.983702 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-15 06:10:57.983713 | orchestrator | Sunday 15 February 2026 06:10:39 +0000 (0:00:00.802) 0:17:17.396 *******
2026-02-15 06:10:57.983766 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-02-15 06:10:57.983779 | orchestrator |
2026-02-15 06:10:57.983791 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-15 06:10:57.983820 | orchestrator | Sunday 15 February 2026 06:10:40 +0000 (0:00:01.132) 0:17:18.529 *******
2026-02-15 06:10:57.983831 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-02-15 06:10:57.983843 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-15 06:10:57.983853 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-15 06:10:57.983864 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-15 06:10:57.983875 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-15 06:10:57.983885 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-15 06:10:57.983895 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-15 06:10:57.983906 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-15 06:10:57.983916 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-15 06:10:57.983927 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-15 06:10:57.983937 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 06:10:57.983948 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 06:10:57.983958 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 06:10:57.983969 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 06:10:57.983979 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-02-15 06:10:57.983990 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-02-15 06:10:57.984000 | orchestrator |
2026-02-15 06:10:57.984011 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-15 06:10:57.984021 | orchestrator | Sunday 15 February 2026 06:10:46 +0000 (0:00:06.337) 0:17:24.866 *******
2026-02-15 06:10:57.984032 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984042 | orchestrator |
2026-02-15 06:10:57.984053 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-15 06:10:57.984063 | orchestrator | Sunday 15 February 2026 06:10:47 +0000 (0:00:00.810) 0:17:25.676 *******
2026-02-15 06:10:57.984082 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984093 | orchestrator |
2026-02-15 06:10:57.984104 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 06:10:57.984114 | orchestrator | Sunday 15 February 2026 06:10:48 +0000 (0:00:00.805) 0:17:26.482 *******
2026-02-15 06:10:57.984125 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984136 | orchestrator |
2026-02-15 06:10:57.984146 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 06:10:57.984157 | orchestrator | Sunday 15 February 2026 06:10:49 +0000 (0:00:00.835) 0:17:27.318 *******
2026-02-15 06:10:57.984167 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984178 | orchestrator |
2026-02-15 06:10:57.984188 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 06:10:57.984199 | orchestrator | Sunday 15 February 2026 06:10:49 +0000 (0:00:00.784) 0:17:28.102 *******
2026-02-15 06:10:57.984209 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984220 | orchestrator |
2026-02-15 06:10:57.984231 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 06:10:57.984241 | orchestrator | Sunday 15 February 2026 06:10:50 +0000 (0:00:00.796) 0:17:28.899 *******
2026-02-15 06:10:57.984252 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984262 | orchestrator |
2026-02-15 06:10:57.984273 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-15 06:10:57.984284 | orchestrator | Sunday 15 February 2026 06:10:51 +0000 (0:00:00.758) 0:17:29.657 *******
2026-02-15 06:10:57.984294 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984305 | orchestrator |
2026-02-15 06:10:57.984316 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-15 06:10:57.984326 | orchestrator | Sunday 15 February 2026 06:10:52 +0000 (0:00:00.762) 0:17:30.419 *******
2026-02-15 06:10:57.984337 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984347 | orchestrator |
2026-02-15 06:10:57.984358 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-15 06:10:57.984369 | orchestrator | Sunday 15 February 2026 06:10:53 +0000 (0:00:00.834) 0:17:31.254 *******
2026-02-15 06:10:57.984379 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984390 | orchestrator |
2026-02-15 06:10:57.984401 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-15 06:10:57.984411 | orchestrator | Sunday 15 February 2026 06:10:53 +0000 (0:00:00.791) 0:17:32.046 *******
2026-02-15 06:10:57.984422 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984433 | orchestrator |
2026-02-15 06:10:57.984443 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-15 06:10:57.984454 | orchestrator | Sunday 15 February 2026 06:10:54 +0000 (0:00:00.801) 0:17:32.847 *******
2026-02-15 06:10:57.984464 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984475 | orchestrator |
2026-02-15 06:10:57.984485 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-15 06:10:57.984496 | orchestrator | Sunday 15 February 2026 06:10:55 +0000 (0:00:00.798) 0:17:33.645 *******
2026-02-15 06:10:57.984506 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984517 | orchestrator |
2026-02-15 06:10:57.984527 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-15 06:10:57.984543 | orchestrator | Sunday 15 February 2026 06:10:56 +0000 (0:00:00.787) 0:17:34.433 *******
2026-02-15 06:10:57.984554 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984565 | orchestrator |
2026-02-15 06:10:57.984576 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-15 06:10:57.984586 | orchestrator | Sunday 15 February 2026 06:10:57 +0000 (0:00:00.868) 0:17:35.302 *******
2026-02-15 06:10:57.984597 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:10:57.984607 | orchestrator |
2026-02-15 06:10:57.984618 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-15 06:10:57.984642 | orchestrator | Sunday 15 February 2026 06:10:57 +0000 (0:00:00.773) 0:17:36.075 *******
2026-02-15 06:11:46.343200 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.343348 | orchestrator |
2026-02-15 06:11:46.343369 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-15 06:11:46.343383 | orchestrator | Sunday 15 February 2026 06:10:58 +0000 (0:00:00.979) 0:17:37.055 *******
2026-02-15 06:11:46.343395 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.343406 | orchestrator |
2026-02-15 06:11:46.343417 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-15 06:11:46.343429 | orchestrator | Sunday 15 February 2026 06:10:59 +0000 (0:00:00.771) 0:17:37.826 *******
2026-02-15 06:11:46.343441 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.343452 | orchestrator |
2026-02-15 06:11:46.343464 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:11:46.343476 | orchestrator | Sunday 15 February 2026 06:11:00 +0000 (0:00:00.801) 0:17:38.628 *******
2026-02-15 06:11:46.343490 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.343508 | orchestrator |
2026-02-15 06:11:46.343588 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:11:46.343610 | orchestrator | Sunday 15 February 2026 06:11:01 +0000 (0:00:00.807) 0:17:39.435 *******
2026-02-15 06:11:46.343628 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.343648 | orchestrator |
2026-02-15 06:11:46.343666 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:11:46.343685 | orchestrator | Sunday 15 February 2026 06:11:02 +0000 (0:00:00.853) 0:17:40.289 *******
2026-02-15 06:11:46.343704 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.343723 | orchestrator |
2026-02-15 06:11:46.343741 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:11:46.343754 | orchestrator | Sunday 15 February 2026 06:11:02 +0000 (0:00:00.789) 0:17:41.078 *******
2026-02-15 06:11:46.343767 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.343779 | orchestrator |
2026-02-15 06:11:46.343791 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:11:46.343804 | orchestrator | Sunday 15 February 2026 06:11:03 +0000 (0:00:00.791) 0:17:41.870 *******
2026-02-15 06:11:46.343817 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-15 06:11:46.343830 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-15 06:11:46.343842 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-15 06:11:46.343854 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.343866 | orchestrator |
2026-02-15 06:11:46.343879 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:11:46.343891 | orchestrator | Sunday 15 February 2026 06:11:04 +0000 (0:00:01.111) 0:17:42.981 *******
2026-02-15 06:11:46.343903 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-15 06:11:46.343915 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-15 06:11:46.343928 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-15 06:11:46.343940 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.343952 | orchestrator |
2026-02-15 06:11:46.343963 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:11:46.343974 | orchestrator | Sunday 15 February 2026 06:11:05 +0000 (0:00:01.085) 0:17:44.067 *******
2026-02-15 06:11:46.343984 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-15 06:11:46.343995 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-15 06:11:46.344006 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-15 06:11:46.344016 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.344027 | orchestrator |
2026-02-15 06:11:46.344038 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:11:46.344075 | orchestrator | Sunday 15 February 2026 06:11:07 +0000 (0:00:01.050) 0:17:45.117 *******
2026-02-15 06:11:46.344086 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.344097 | orchestrator |
2026-02-15 06:11:46.344111 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:11:46.344128 | orchestrator | Sunday 15 February 2026 06:11:07 +0000 (0:00:00.774) 0:17:45.892 *******
2026-02-15 06:11:46.344147 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-15 06:11:46.344165 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.344184 | orchestrator |
2026-02-15 06:11:46.344203 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-15 06:11:46.344222 | orchestrator | Sunday 15 February 2026 06:11:08 +0000 (0:00:00.905) 0:17:46.798 *******
2026-02-15 06:11:46.344241 | orchestrator | changed: [testbed-node-2]
2026-02-15 06:11:46.344260 | orchestrator |
2026-02-15 06:11:46.344278 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2026-02-15 06:11:46.344292 | orchestrator | Sunday 15 February 2026 06:11:10 +0000 (0:00:01.566) 0:17:48.365 *******
2026-02-15 06:11:46.344303 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:11:46.344314 | orchestrator |
2026-02-15 06:11:46.344324 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2026-02-15 06:11:46.344335 | orchestrator | Sunday 15 February 2026 06:11:11 +0000 (0:00:00.826) 0:17:49.191 *******
2026-02-15 06:11:46.344346 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-2
2026-02-15 06:11:46.344357 | orchestrator |
2026-02-15 06:11:46.344383 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2026-02-15 06:11:46.344395 | orchestrator | Sunday 15 February 2026 06:11:12 +0000 (0:00:01.148) 0:17:50.340 *******
2026-02-15 06:11:46.344405 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:11:46.344416 | orchestrator |
2026-02-15 06:11:46.344426 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2026-02-15 06:11:46.344437 | orchestrator | Sunday 15 February 2026 06:11:15 +0000 (0:00:03.230) 0:17:53.570 *******
2026-02-15 06:11:46.344448 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.344462 | orchestrator |
2026-02-15 06:11:46.344480 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2026-02-15 06:11:46.344548 | orchestrator | Sunday 15 February 2026 06:11:16 +0000 (0:00:01.179) 0:17:54.750 *******
2026-02-15 06:11:46.344570 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:11:46.344587 | orchestrator |
2026-02-15 06:11:46.344599 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2026-02-15 06:11:46.344610 | orchestrator | Sunday 15 February 2026 06:11:17 +0000 (0:00:01.188) 0:17:55.939 *******
2026-02-15 06:11:46.344620 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:11:46.344630 | orchestrator |
2026-02-15 06:11:46.344641 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2026-02-15 06:11:46.344651 | orchestrator | Sunday 15 February 2026 06:11:19 +0000 (0:00:01.191) 0:17:57.130 *******
2026-02-15 06:11:46.344662 | orchestrator | changed: [testbed-node-2]
2026-02-15 06:11:46.344672 | orchestrator |
2026-02-15 06:11:46.344682 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2026-02-15 06:11:46.344693 | orchestrator | Sunday 15 February 2026 06:11:21 +0000 (0:00:02.077) 0:17:59.208 *******
2026-02-15 06:11:46.344703 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:11:46.344714 | orchestrator |
2026-02-15 06:11:46.344725 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2026-02-15 06:11:46.344735 | orchestrator | Sunday 15 February 2026 06:11:22 +0000 (0:00:01.598) 0:18:00.806 *******
2026-02-15 06:11:46.344746 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:11:46.344756 | orchestrator |
2026-02-15 06:11:46.344767 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2026-02-15 06:11:46.344777 | orchestrator | Sunday 15 February 2026 06:11:24 +0000 (0:00:01.530) 0:18:02.337 *******
2026-02-15 06:11:46.344788 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:11:46.344798 | orchestrator |
2026-02-15 06:11:46.344820 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2026-02-15 06:11:46.344831 | orchestrator | Sunday 15 February 2026 06:11:25 +0000 (0:00:01.497) 0:18:03.834 *******
2026-02-15 06:11:46.344842 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:11:46.344852 | orchestrator |
2026-02-15 06:11:46.344862 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2026-02-15 06:11:46.344873 | orchestrator | Sunday 15 February 2026 06:11:27 +0000 (0:00:01.601) 0:18:05.436 *******
2026-02-15 06:11:46.344883 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:11:46.344894 | orchestrator |
2026-02-15 06:11:46.344904 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2026-02-15 06:11:46.344915 | orchestrator | Sunday 15 February 2026 06:11:28 +0000 (0:00:01.609) 0:18:07.045 *******
2026-02-15 06:11:46.344925 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-15 06:11:46.344936 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-02-15 06:11:46.344946 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-15 06:11:46.344957 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2026-02-15 06:11:46.344968 | orchestrator |
2026-02-15 06:11:46.344979 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2026-02-15 06:11:46.344990 | orchestrator | Sunday 15 February 2026 06:11:33 +0000 (0:00:04.351) 0:18:11.396 *******
2026-02-15 06:11:46.345000 | orchestrator | changed: [testbed-node-2]
2026-02-15 06:11:46.345010 | orchestrator |
2026-02-15 06:11:46.345021 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2026-02-15 06:11:46.345035 | orchestrator | Sunday 15 February 2026 06:11:35 +0000 (0:00:02.043) 0:18:13.440 *******
2026-02-15 06:11:46.345053 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:11:46.345071 | orchestrator |
2026-02-15 06:11:46.345089 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2026-02-15 06:11:46.345108 | orchestrator | Sunday 15 February 2026 06:11:36 +0000 (0:00:01.179) 0:18:14.619 *******
2026-02-15 06:11:46.345126 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:11:46.345143 | orchestrator |
2026-02-15 06:11:46.345154 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2026-02-15 06:11:46.345164 | orchestrator | Sunday 15 February 2026 06:11:37 +0000 (0:00:01.148) 0:18:15.768 *******
2026-02-15 06:11:46.345175 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:11:46.345185 | orchestrator |
2026-02-15 06:11:46.345196 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2026-02-15 06:11:46.345206 | orchestrator | Sunday 15 February 2026 06:11:39 +0000 (0:00:01.811) 0:18:17.579 *******
2026-02-15 06:11:46.345217 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:11:46.345233 | orchestrator |
2026-02-15 06:11:46.345252 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2026-02-15 06:11:46.345271 | orchestrator | Sunday 15 February 2026 06:11:40 +0000 (0:00:01.459) 0:18:19.039 *******
2026-02-15 06:11:46.345289 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:11:46.345309 | orchestrator |
2026-02-15 06:11:46.345328 | orchestrator | TASK [ceph-mon : Include start_monitor.yml]
************************************ 2026-02-15 06:11:46.345346 | orchestrator | Sunday 15 February 2026 06:11:41 +0000 (0:00:00.767) 0:18:19.806 ******* 2026-02-15 06:11:46.345365 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-2 2026-02-15 06:11:46.345377 | orchestrator | 2026-02-15 06:11:46.345387 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-02-15 06:11:46.345398 | orchestrator | Sunday 15 February 2026 06:11:42 +0000 (0:00:01.113) 0:18:20.920 ******* 2026-02-15 06:11:46.345408 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:11:46.345419 | orchestrator | 2026-02-15 06:11:46.345437 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-02-15 06:11:46.345448 | orchestrator | Sunday 15 February 2026 06:11:43 +0000 (0:00:01.130) 0:18:22.050 ******* 2026-02-15 06:11:46.345467 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:11:46.345478 | orchestrator | 2026-02-15 06:11:46.345489 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-02-15 06:11:46.345499 | orchestrator | Sunday 15 February 2026 06:11:45 +0000 (0:00:01.214) 0:18:23.265 ******* 2026-02-15 06:11:46.345510 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-2 2026-02-15 06:11:46.345520 | orchestrator | 2026-02-15 06:11:46.345587 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-02-15 06:11:46.345609 | orchestrator | Sunday 15 February 2026 06:11:46 +0000 (0:00:01.167) 0:18:24.432 ******* 2026-02-15 06:12:55.248332 | orchestrator | changed: [testbed-node-2] 2026-02-15 06:12:55.248452 | orchestrator | 2026-02-15 06:12:55.248469 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-02-15 06:12:55.248482 | orchestrator | Sunday 15 February 2026 06:11:49 +0000 
(0:00:02.814) 0:18:27.247 ******* 2026-02-15 06:12:55.248493 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:12:55.248505 | orchestrator | 2026-02-15 06:12:55.248516 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-02-15 06:12:55.248527 | orchestrator | Sunday 15 February 2026 06:11:51 +0000 (0:00:02.103) 0:18:29.351 ******* 2026-02-15 06:12:55.248538 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:12:55.248548 | orchestrator | 2026-02-15 06:12:55.248559 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-02-15 06:12:55.248570 | orchestrator | Sunday 15 February 2026 06:11:53 +0000 (0:00:02.464) 0:18:31.816 ******* 2026-02-15 06:12:55.248581 | orchestrator | changed: [testbed-node-2] 2026-02-15 06:12:55.248592 | orchestrator | 2026-02-15 06:12:55.248603 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-02-15 06:12:55.248614 | orchestrator | Sunday 15 February 2026 06:11:56 +0000 (0:00:02.923) 0:18:34.739 ******* 2026-02-15 06:12:55.248625 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-2 2026-02-15 06:12:55.248637 | orchestrator | 2026-02-15 06:12:55.248648 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-02-15 06:12:55.248659 | orchestrator | Sunday 15 February 2026 06:11:57 +0000 (0:00:01.273) 0:18:36.013 ******* 2026-02-15 06:12:55.248670 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-02-15 06:12:55.248681 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:12:55.248692 | orchestrator | 2026-02-15 06:12:55.248703 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-02-15 06:12:55.248714 | orchestrator | Sunday 15 February 2026 06:12:20 +0000 (0:00:23.043) 0:18:59.057 ******* 2026-02-15 06:12:55.248724 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:12:55.248735 | orchestrator | 2026-02-15 06:12:55.248746 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-02-15 06:12:55.248757 | orchestrator | Sunday 15 February 2026 06:12:23 +0000 (0:00:02.667) 0:19:01.724 ******* 2026-02-15 06:12:55.248767 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:12:55.248778 | orchestrator | 2026-02-15 06:12:55.248789 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-02-15 06:12:55.248801 | orchestrator | Sunday 15 February 2026 06:12:24 +0000 (0:00:00.782) 0:19:02.507 ******* 2026-02-15 06:12:55.248816 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-02-15 06:12:55.248832 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-02-15 06:12:55.248868 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': 
{'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-02-15 06:12:55.248881 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-02-15 06:12:55.248908 | orchestrator | ok: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-02-15 06:12:55.248923 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__3f19c047a1e0790fa73490a0facc46d2fed5a64d'}])  2026-02-15 06:12:55.248938 | orchestrator | 2026-02-15 06:12:55.248969 | orchestrator | TASK [Start ceph mgr] ********************************************************** 2026-02-15 06:12:55.248983 | orchestrator | Sunday 15 February 2026 06:12:33 +0000 (0:00:09.541) 0:19:12.049 ******* 2026-02-15 06:12:55.248996 | orchestrator | changed: [testbed-node-2] 2026-02-15 06:12:55.249007 | orchestrator | 
2026-02-15 06:12:55.249018 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:12:55.249029 | orchestrator | Sunday 15 February 2026 06:12:36 +0000 (0:00:02.234) 0:19:14.284 ******* 2026-02-15 06:12:55.249039 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:12:55.249050 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-1) 2026-02-15 06:12:55.249061 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-2) 2026-02-15 06:12:55.249071 | orchestrator | 2026-02-15 06:12:55.249082 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:12:55.249093 | orchestrator | Sunday 15 February 2026 06:12:38 +0000 (0:00:01.952) 0:19:16.237 ******* 2026-02-15 06:12:55.249104 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-15 06:12:55.249114 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-15 06:12:55.249125 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-15 06:12:55.249136 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:12:55.249146 | orchestrator | 2026-02-15 06:12:55.249157 | orchestrator | TASK [Non container | waiting for the monitor to join the quorum...] *********** 2026-02-15 06:12:55.249168 | orchestrator | Sunday 15 February 2026 06:12:39 +0000 (0:00:01.529) 0:19:17.766 ******* 2026-02-15 06:12:55.249179 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:12:55.249189 | orchestrator | 2026-02-15 06:12:55.249200 | orchestrator | TASK [Container | waiting for the containerized monitor to join the quorum...] 
*** 2026-02-15 06:12:55.249211 | orchestrator | Sunday 15 February 2026 06:12:40 +0000 (0:00:00.794) 0:19:18.561 ******* 2026-02-15 06:12:55.249221 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:12:55.249232 | orchestrator | 2026-02-15 06:12:55.249243 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-15 06:12:55.249253 | orchestrator | Sunday 15 February 2026 06:12:42 +0000 (0:00:01.925) 0:19:20.486 ******* 2026-02-15 06:12:55.249291 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:12:55.249303 | orchestrator | 2026-02-15 06:12:55.249314 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-15 06:12:55.249325 | orchestrator | Sunday 15 February 2026 06:12:43 +0000 (0:00:00.795) 0:19:21.281 ******* 2026-02-15 06:12:55.249336 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:12:55.249347 | orchestrator | 2026-02-15 06:12:55.249357 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-15 06:12:55.249368 | orchestrator | Sunday 15 February 2026 06:12:43 +0000 (0:00:00.745) 0:19:22.027 ******* 2026-02-15 06:12:55.249379 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:12:55.249389 | orchestrator | 2026-02-15 06:12:55.249400 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-15 06:12:55.249411 | orchestrator | Sunday 15 February 2026 06:12:44 +0000 (0:00:00.780) 0:19:22.807 ******* 2026-02-15 06:12:55.249421 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:12:55.249432 | orchestrator | 2026-02-15 06:12:55.249443 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-15 06:12:55.249453 | orchestrator | Sunday 15 February 2026 06:12:45 +0000 (0:00:00.823) 0:19:23.630 ******* 2026-02-15 06:12:55.249464 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:12:55.249475 | 
orchestrator | 2026-02-15 06:12:55.249485 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-15 06:12:55.249496 | orchestrator | Sunday 15 February 2026 06:12:46 +0000 (0:00:00.763) 0:19:24.394 ******* 2026-02-15 06:12:55.249507 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:12:55.249517 | orchestrator | 2026-02-15 06:12:55.249528 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-15 06:12:55.249539 | orchestrator | Sunday 15 February 2026 06:12:47 +0000 (0:00:00.826) 0:19:25.220 ******* 2026-02-15 06:12:55.249549 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:12:55.249560 | orchestrator | 2026-02-15 06:12:55.249570 | orchestrator | PLAY [Reset mon_host] ********************************************************** 2026-02-15 06:12:55.249581 | orchestrator | 2026-02-15 06:12:55.249591 | orchestrator | TASK [Reset mon_host fact] ***************************************************** 2026-02-15 06:12:55.249602 | orchestrator | Sunday 15 February 2026 06:12:48 +0000 (0:00:01.796) 0:19:27.017 ******* 2026-02-15 06:12:55.249613 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:12:55.249624 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:12:55.249634 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:12:55.249645 | orchestrator | 2026-02-15 06:12:55.249656 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] *********** 2026-02-15 06:12:55.249667 | orchestrator | 2026-02-15 06:12:55.249677 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-15 06:12:55.249688 | orchestrator | Sunday 15 February 2026 06:12:50 +0000 (0:00:01.717) 0:19:28.735 ******* 2026-02-15 06:12:55.249699 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:12:55.249710 | orchestrator | 2026-02-15 06:12:55.249720 | orchestrator | TASK [ceph-facts : Include facts.yml] 
****************************************** 2026-02-15 06:12:55.249748 | orchestrator | Sunday 15 February 2026 06:12:51 +0000 (0:00:01.113) 0:19:29.849 ******* 2026-02-15 06:12:55.249759 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:12:55.249770 | orchestrator | 2026-02-15 06:12:55.249781 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-15 06:12:55.249791 | orchestrator | Sunday 15 February 2026 06:12:52 +0000 (0:00:01.145) 0:19:30.994 ******* 2026-02-15 06:12:55.249802 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:12:55.249812 | orchestrator | 2026-02-15 06:12:55.249823 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-15 06:12:55.249834 | orchestrator | Sunday 15 February 2026 06:12:54 +0000 (0:00:01.173) 0:19:32.168 ******* 2026-02-15 06:12:55.249844 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:12:55.249855 | orchestrator | 2026-02-15 06:12:55.249872 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-15 06:13:41.455580 | orchestrator | Sunday 15 February 2026 06:12:55 +0000 (0:00:01.171) 0:19:33.340 ******* 2026-02-15 06:13:41.455704 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.455722 | orchestrator | 2026-02-15 06:13:41.455735 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-15 06:13:41.455746 | orchestrator | Sunday 15 February 2026 06:12:56 +0000 (0:00:01.177) 0:19:34.518 ******* 2026-02-15 06:13:41.455757 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.455768 | orchestrator | 2026-02-15 06:13:41.455779 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-15 06:13:41.455790 | orchestrator | Sunday 15 February 2026 06:12:57 +0000 (0:00:01.139) 0:19:35.658 ******* 2026-02-15 06:13:41.455801 | orchestrator | skipping: 
[testbed-node-0] 2026-02-15 06:13:41.455812 | orchestrator | 2026-02-15 06:13:41.455823 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-15 06:13:41.455834 | orchestrator | Sunday 15 February 2026 06:12:58 +0000 (0:00:01.148) 0:19:36.806 ******* 2026-02-15 06:13:41.455845 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.455856 | orchestrator | 2026-02-15 06:13:41.455866 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-15 06:13:41.455877 | orchestrator | Sunday 15 February 2026 06:12:59 +0000 (0:00:01.212) 0:19:38.019 ******* 2026-02-15 06:13:41.455888 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.455899 | orchestrator | 2026-02-15 06:13:41.455909 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-15 06:13:41.455920 | orchestrator | Sunday 15 February 2026 06:13:01 +0000 (0:00:01.146) 0:19:39.165 ******* 2026-02-15 06:13:41.455931 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.455942 | orchestrator | 2026-02-15 06:13:41.455953 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-15 06:13:41.455964 | orchestrator | Sunday 15 February 2026 06:13:02 +0000 (0:00:01.119) 0:19:40.285 ******* 2026-02-15 06:13:41.455974 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.455985 | orchestrator | 2026-02-15 06:13:41.455996 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-15 06:13:41.456007 | orchestrator | Sunday 15 February 2026 06:13:03 +0000 (0:00:01.155) 0:19:41.440 ******* 2026-02-15 06:13:41.456017 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456028 | orchestrator | 2026-02-15 06:13:41.456039 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-15 06:13:41.456050 | 
orchestrator | Sunday 15 February 2026 06:13:04 +0000 (0:00:01.128) 0:19:42.569 ******* 2026-02-15 06:13:41.456061 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456072 | orchestrator | 2026-02-15 06:13:41.456083 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-15 06:13:41.456094 | orchestrator | Sunday 15 February 2026 06:13:05 +0000 (0:00:01.167) 0:19:43.737 ******* 2026-02-15 06:13:41.456105 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456116 | orchestrator | 2026-02-15 06:13:41.456209 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-15 06:13:41.456224 | orchestrator | Sunday 15 February 2026 06:13:06 +0000 (0:00:01.131) 0:19:44.868 ******* 2026-02-15 06:13:41.456236 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456249 | orchestrator | 2026-02-15 06:13:41.456262 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-15 06:13:41.456274 | orchestrator | Sunday 15 February 2026 06:13:07 +0000 (0:00:01.106) 0:19:45.975 ******* 2026-02-15 06:13:41.456286 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456298 | orchestrator | 2026-02-15 06:13:41.456311 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-15 06:13:41.456323 | orchestrator | Sunday 15 February 2026 06:13:09 +0000 (0:00:01.140) 0:19:47.115 ******* 2026-02-15 06:13:41.456335 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456347 | orchestrator | 2026-02-15 06:13:41.456402 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-15 06:13:41.456427 | orchestrator | Sunday 15 February 2026 06:13:10 +0000 (0:00:01.191) 0:19:48.307 ******* 2026-02-15 06:13:41.456439 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456451 | orchestrator | 2026-02-15 
06:13:41.456464 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-15 06:13:41.456476 | orchestrator | Sunday 15 February 2026 06:13:11 +0000 (0:00:01.150) 0:19:49.458 ******* 2026-02-15 06:13:41.456489 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456499 | orchestrator | 2026-02-15 06:13:41.456510 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-15 06:13:41.456522 | orchestrator | Sunday 15 February 2026 06:13:12 +0000 (0:00:01.158) 0:19:50.616 ******* 2026-02-15 06:13:41.456532 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456543 | orchestrator | 2026-02-15 06:13:41.456554 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-15 06:13:41.456564 | orchestrator | Sunday 15 February 2026 06:13:13 +0000 (0:00:01.204) 0:19:51.821 ******* 2026-02-15 06:13:41.456575 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456586 | orchestrator | 2026-02-15 06:13:41.456597 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-15 06:13:41.456608 | orchestrator | Sunday 15 February 2026 06:13:14 +0000 (0:00:01.230) 0:19:53.051 ******* 2026-02-15 06:13:41.456634 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456646 | orchestrator | 2026-02-15 06:13:41.456657 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-15 06:13:41.456668 | orchestrator | Sunday 15 February 2026 06:13:16 +0000 (0:00:01.153) 0:19:54.205 ******* 2026-02-15 06:13:41.456679 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456689 | orchestrator | 2026-02-15 06:13:41.456700 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-15 06:13:41.456711 | orchestrator | Sunday 15 February 2026 06:13:17 +0000 
(0:00:01.172) 0:19:55.377 ******* 2026-02-15 06:13:41.456721 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456732 | orchestrator | 2026-02-15 06:13:41.456743 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-15 06:13:41.456772 | orchestrator | Sunday 15 February 2026 06:13:18 +0000 (0:00:01.150) 0:19:56.528 ******* 2026-02-15 06:13:41.456784 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456795 | orchestrator | 2026-02-15 06:13:41.456805 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-15 06:13:41.456816 | orchestrator | Sunday 15 February 2026 06:13:19 +0000 (0:00:01.141) 0:19:57.670 ******* 2026-02-15 06:13:41.456827 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456837 | orchestrator | 2026-02-15 06:13:41.456848 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-15 06:13:41.456858 | orchestrator | Sunday 15 February 2026 06:13:20 +0000 (0:00:01.183) 0:19:58.854 ******* 2026-02-15 06:13:41.456869 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456880 | orchestrator | 2026-02-15 06:13:41.456891 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-15 06:13:41.456901 | orchestrator | Sunday 15 February 2026 06:13:21 +0000 (0:00:01.144) 0:19:59.998 ******* 2026-02-15 06:13:41.456912 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456922 | orchestrator | 2026-02-15 06:13:41.456933 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-15 06:13:41.456944 | orchestrator | Sunday 15 February 2026 06:13:23 +0000 (0:00:01.185) 0:20:01.183 ******* 2026-02-15 06:13:41.456954 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.456965 | orchestrator | 2026-02-15 06:13:41.456975 | orchestrator | TASK 
[ceph-container-common : Get ceph version] ******************************** 2026-02-15 06:13:41.456986 | orchestrator | Sunday 15 February 2026 06:13:24 +0000 (0:00:01.138) 0:20:02.322 ******* 2026-02-15 06:13:41.456997 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.457015 | orchestrator | 2026-02-15 06:13:41.457026 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-15 06:13:41.457037 | orchestrator | Sunday 15 February 2026 06:13:25 +0000 (0:00:01.152) 0:20:03.474 ******* 2026-02-15 06:13:41.457047 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.457058 | orchestrator | 2026-02-15 06:13:41.457068 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-15 06:13:41.457079 | orchestrator | Sunday 15 February 2026 06:13:26 +0000 (0:00:01.093) 0:20:04.568 ******* 2026-02-15 06:13:41.457089 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.457100 | orchestrator | 2026-02-15 06:13:41.457111 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-15 06:13:41.457144 | orchestrator | Sunday 15 February 2026 06:13:27 +0000 (0:00:01.196) 0:20:05.764 ******* 2026-02-15 06:13:41.457164 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.457183 | orchestrator | 2026-02-15 06:13:41.457201 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-15 06:13:41.457217 | orchestrator | Sunday 15 February 2026 06:13:28 +0000 (0:00:01.139) 0:20:06.904 ******* 2026-02-15 06:13:41.457229 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:13:41.457239 | orchestrator | 2026-02-15 06:13:41.457250 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-15 06:13:41.457261 | orchestrator | Sunday 15 February 2026 06:13:29 +0000 (0:00:01.163) 0:20:08.068 ******* 2026-02-15 
06:13:41.457271 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:13:41.457282 | orchestrator |
2026-02-15 06:13:41.457293 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 06:13:41.457303 | orchestrator | Sunday 15 February 2026 06:13:31 +0000 (0:00:01.176) 0:20:09.244 *******
2026-02-15 06:13:41.457314 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:13:41.457324 | orchestrator |
2026-02-15 06:13:41.457335 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 06:13:41.457346 | orchestrator | Sunday 15 February 2026 06:13:32 +0000 (0:00:01.098) 0:20:10.343 *******
2026-02-15 06:13:41.457356 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:13:41.457367 | orchestrator |
2026-02-15 06:13:41.457377 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 06:13:41.457387 | orchestrator | Sunday 15 February 2026 06:13:33 +0000 (0:00:01.132) 0:20:11.476 *******
2026-02-15 06:13:41.457398 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:13:41.457408 | orchestrator |
2026-02-15 06:13:41.457419 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 06:13:41.457430 | orchestrator | Sunday 15 February 2026 06:13:34 +0000 (0:00:01.158) 0:20:12.635 *******
2026-02-15 06:13:41.457442 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:13:41.457460 | orchestrator |
2026-02-15 06:13:41.457472 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-15 06:13:41.457484 | orchestrator | Sunday 15 February 2026 06:13:35 +0000 (0:00:01.187) 0:20:13.822 *******
2026-02-15 06:13:41.457494 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:13:41.457505 | orchestrator |
2026-02-15 06:13:41.457516 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-15 06:13:41.457526 | orchestrator | Sunday 15 February 2026 06:13:36 +0000 (0:00:01.138) 0:20:14.961 *******
2026-02-15 06:13:41.457537 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:13:41.457547 | orchestrator |
2026-02-15 06:13:41.457558 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-15 06:13:41.457569 | orchestrator | Sunday 15 February 2026 06:13:38 +0000 (0:00:01.144) 0:20:16.106 *******
2026-02-15 06:13:41.457586 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:13:41.457597 | orchestrator |
2026-02-15 06:13:41.457608 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-15 06:13:41.457618 | orchestrator | Sunday 15 February 2026 06:13:39 +0000 (0:00:01.157) 0:20:17.263 *******
2026-02-15 06:13:41.457637 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:13:41.457647 | orchestrator |
2026-02-15 06:13:41.457658 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-15 06:13:41.457668 | orchestrator | Sunday 15 February 2026 06:13:40 +0000 (0:00:01.180) 0:20:18.444 *******
2026-02-15 06:13:41.457679 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:13:41.457690 | orchestrator |
2026-02-15 06:13:41.457701 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-15 06:13:41.457719 | orchestrator | Sunday 15 February 2026 06:13:41 +0000 (0:00:01.102) 0:20:19.546 *******
2026-02-15 06:14:20.123408 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.123542 | orchestrator |
2026-02-15 06:14:20.123561 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-15 06:14:20.123633 | orchestrator | Sunday 15 February 2026 06:13:42 +0000 (0:00:01.151) 0:20:20.697 *******
2026-02-15 06:14:20.123648 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.123659 | orchestrator |
2026-02-15 06:14:20.123671 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-15 06:14:20.123682 | orchestrator | Sunday 15 February 2026 06:13:43 +0000 (0:00:01.255) 0:20:21.953 *******
2026-02-15 06:14:20.123693 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.123704 | orchestrator |
2026-02-15 06:14:20.123714 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-15 06:14:20.123726 | orchestrator | Sunday 15 February 2026 06:13:45 +0000 (0:00:01.186) 0:20:23.140 *******
2026-02-15 06:14:20.123736 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.123747 | orchestrator |
2026-02-15 06:14:20.123758 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-15 06:14:20.123769 | orchestrator | Sunday 15 February 2026 06:13:46 +0000 (0:00:01.219) 0:20:24.359 *******
2026-02-15 06:14:20.123780 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.123791 | orchestrator |
2026-02-15 06:14:20.123801 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-15 06:14:20.123812 | orchestrator | Sunday 15 February 2026 06:13:47 +0000 (0:00:01.224) 0:20:25.584 *******
2026-02-15 06:14:20.123823 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.123834 | orchestrator |
2026-02-15 06:14:20.123845 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:14:20.123858 | orchestrator | Sunday 15 February 2026 06:13:48 +0000 (0:00:01.164) 0:20:26.749 *******
2026-02-15 06:14:20.123870 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.123889 | orchestrator |
2026-02-15 06:14:20.123909 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:14:20.123928 | orchestrator | Sunday 15 February 2026 06:13:49 +0000 (0:00:01.163) 0:20:27.913 *******
2026-02-15 06:14:20.123946 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.123968 | orchestrator |
2026-02-15 06:14:20.123988 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:14:20.124033 | orchestrator | Sunday 15 February 2026 06:13:51 +0000 (0:00:01.220) 0:20:29.133 *******
2026-02-15 06:14:20.124052 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124070 | orchestrator |
2026-02-15 06:14:20.124089 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:14:20.124110 | orchestrator | Sunday 15 February 2026 06:13:52 +0000 (0:00:01.281) 0:20:30.415 *******
2026-02-15 06:14:20.124129 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124147 | orchestrator |
2026-02-15 06:14:20.124161 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:14:20.124174 | orchestrator | Sunday 15 February 2026 06:13:53 +0000 (0:00:01.176) 0:20:31.591 *******
2026-02-15 06:14:20.124187 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-15 06:14:20.124201 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-15 06:14:20.124240 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-15 06:14:20.124253 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124265 | orchestrator |
2026-02-15 06:14:20.124277 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:14:20.124290 | orchestrator | Sunday 15 February 2026 06:13:54 +0000 (0:00:01.413) 0:20:33.005 *******
2026-02-15 06:14:20.124302 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-15 06:14:20.124313 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-15 06:14:20.124324 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-15 06:14:20.124334 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124345 | orchestrator |
2026-02-15 06:14:20.124356 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:14:20.124366 | orchestrator | Sunday 15 February 2026 06:13:56 +0000 (0:00:01.740) 0:20:34.746 *******
2026-02-15 06:14:20.124377 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-15 06:14:20.124388 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-15 06:14:20.124398 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-15 06:14:20.124408 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124419 | orchestrator |
2026-02-15 06:14:20.124430 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:14:20.124440 | orchestrator | Sunday 15 February 2026 06:13:58 +0000 (0:00:01.742) 0:20:36.488 *******
2026-02-15 06:14:20.124451 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124462 | orchestrator |
2026-02-15 06:14:20.124472 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:14:20.124483 | orchestrator | Sunday 15 February 2026 06:13:59 +0000 (0:00:01.202) 0:20:37.691 *******
2026-02-15 06:14:20.124510 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-15 06:14:20.124521 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124593 | orchestrator |
2026-02-15 06:14:20.124606 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-15 06:14:20.124617 | orchestrator | Sunday 15 February 2026 06:14:00 +0000 (0:00:01.271) 0:20:38.963 *******
2026-02-15 06:14:20.124628 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124638 | orchestrator |
2026-02-15 06:14:20.124649 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-15 06:14:20.124661 | orchestrator | Sunday 15 February 2026 06:14:02 +0000 (0:00:01.142) 0:20:40.105 *******
2026-02-15 06:14:20.124671 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 06:14:20.124682 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-15 06:14:20.124693 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-15 06:14:20.124726 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124738 | orchestrator |
2026-02-15 06:14:20.124748 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-15 06:14:20.124759 | orchestrator | Sunday 15 February 2026 06:14:03 +0000 (0:00:01.434) 0:20:41.540 *******
2026-02-15 06:14:20.124770 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124780 | orchestrator |
2026-02-15 06:14:20.124791 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-15 06:14:20.124802 | orchestrator | Sunday 15 February 2026 06:14:04 +0000 (0:00:01.158) 0:20:42.699 *******
2026-02-15 06:14:20.124812 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124823 | orchestrator |
2026-02-15 06:14:20.124834 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-15 06:14:20.124844 | orchestrator | Sunday 15 February 2026 06:14:05 +0000 (0:00:01.201) 0:20:43.900 *******
2026-02-15 06:14:20.124855 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124865 | orchestrator |
2026-02-15 06:14:20.124876 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-15 06:14:20.124897 | orchestrator | Sunday 15 February 2026 06:14:06 +0000 (0:00:01.135) 0:20:45.036 *******
2026-02-15 06:14:20.124907 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:14:20.124918 | orchestrator |
2026-02-15 06:14:20.124929 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-15 06:14:20.124939 | orchestrator |
2026-02-15 06:14:20.124950 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-15 06:14:20.124960 | orchestrator | Sunday 15 February 2026 06:14:07 +0000 (0:00:01.049) 0:20:46.085 *******
2026-02-15 06:14:20.124971 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.124981 | orchestrator |
2026-02-15 06:14:20.124992 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-15 06:14:20.125038 | orchestrator | Sunday 15 February 2026 06:14:08 +0000 (0:00:00.776) 0:20:46.861 *******
2026-02-15 06:14:20.125052 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125063 | orchestrator |
2026-02-15 06:14:20.125074 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 06:14:20.125085 | orchestrator | Sunday 15 February 2026 06:14:09 +0000 (0:00:00.894) 0:20:47.756 *******
2026-02-15 06:14:20.125100 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125118 | orchestrator |
2026-02-15 06:14:20.125136 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 06:14:20.125154 | orchestrator | Sunday 15 February 2026 06:14:10 +0000 (0:00:00.799) 0:20:48.555 *******
2026-02-15 06:14:20.125174 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125192 | orchestrator |
2026-02-15 06:14:20.125211 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 06:14:20.125223 | orchestrator | Sunday 15 February 2026 06:14:11 +0000 (0:00:00.766) 0:20:49.322 *******
2026-02-15 06:14:20.125233 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125244 | orchestrator |
2026-02-15 06:14:20.125254 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 06:14:20.125265 | orchestrator | Sunday 15 February 2026 06:14:11 +0000 (0:00:00.762) 0:20:50.084 *******
2026-02-15 06:14:20.125276 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125286 | orchestrator |
2026-02-15 06:14:20.125297 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 06:14:20.125307 | orchestrator | Sunday 15 February 2026 06:14:12 +0000 (0:00:00.808) 0:20:50.892 *******
2026-02-15 06:14:20.125318 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125328 | orchestrator |
2026-02-15 06:14:20.125339 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 06:14:20.125350 | orchestrator | Sunday 15 February 2026 06:14:13 +0000 (0:00:00.826) 0:20:51.719 *******
2026-02-15 06:14:20.125360 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125374 | orchestrator |
2026-02-15 06:14:20.125393 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 06:14:20.125410 | orchestrator | Sunday 15 February 2026 06:14:14 +0000 (0:00:00.774) 0:20:52.494 *******
2026-02-15 06:14:20.125428 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125444 | orchestrator |
2026-02-15 06:14:20.125474 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 06:14:20.125491 | orchestrator | Sunday 15 February 2026 06:14:15 +0000 (0:00:00.803) 0:20:53.297 *******
2026-02-15 06:14:20.125572 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125591 | orchestrator |
2026-02-15 06:14:20.125606 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 06:14:20.125621 | orchestrator | Sunday 15 February 2026 06:14:16 +0000 (0:00:00.813) 0:20:54.110 *******
2026-02-15 06:14:20.125637 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125653 | orchestrator |
2026-02-15 06:14:20.125670 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 06:14:20.125687 | orchestrator | Sunday 15 February 2026 06:14:16 +0000 (0:00:00.816) 0:20:54.927 *******
2026-02-15 06:14:20.125705 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125742 | orchestrator |
2026-02-15 06:14:20.125761 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-15 06:14:20.125791 | orchestrator | Sunday 15 February 2026 06:14:17 +0000 (0:00:00.829) 0:20:55.756 *******
2026-02-15 06:14:20.125812 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125830 | orchestrator |
2026-02-15 06:14:20.125849 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-15 06:14:20.125860 | orchestrator | Sunday 15 February 2026 06:14:18 +0000 (0:00:00.792) 0:20:56.549 *******
2026-02-15 06:14:20.125871 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125882 | orchestrator |
2026-02-15 06:14:20.125892 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-15 06:14:20.125903 | orchestrator | Sunday 15 February 2026 06:14:19 +0000 (0:00:00.832) 0:20:57.382 *******
2026-02-15 06:14:20.125914 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:20.125924 | orchestrator |
2026-02-15 06:14:20.125935 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-15 06:14:20.125961 | orchestrator | Sunday 15 February 2026 06:14:20 +0000 (0:00:00.831) 0:20:58.214 *******
2026-02-15 06:14:53.011231 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011348 | orchestrator |
2026-02-15 06:14:53.011364 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-15 06:14:53.011377 | orchestrator | Sunday 15 February 2026 06:14:20 +0000 (0:00:00.837) 0:20:59.051 *******
2026-02-15 06:14:53.011388 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011399 | orchestrator |
2026-02-15 06:14:53.011410 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-15 06:14:53.011421 | orchestrator | Sunday 15 February 2026 06:14:21 +0000 (0:00:00.795) 0:20:59.847 *******
2026-02-15 06:14:53.011432 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011443 | orchestrator |
2026-02-15 06:14:53.011453 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-15 06:14:53.011464 | orchestrator | Sunday 15 February 2026 06:14:22 +0000 (0:00:00.821) 0:21:00.669 *******
2026-02-15 06:14:53.011475 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011486 | orchestrator |
2026-02-15 06:14:53.011496 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-15 06:14:53.011508 | orchestrator | Sunday 15 February 2026 06:14:23 +0000 (0:00:00.845) 0:21:01.514 *******
2026-02-15 06:14:53.011519 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011529 | orchestrator |
2026-02-15 06:14:53.011540 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-15 06:14:53.011551 | orchestrator | Sunday 15 February 2026 06:14:24 +0000 (0:00:00.852) 0:21:02.367 *******
2026-02-15 06:14:53.011561 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011572 | orchestrator |
2026-02-15 06:14:53.011582 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-15 06:14:53.011593 | orchestrator | Sunday 15 February 2026 06:14:25 +0000 (0:00:00.787) 0:21:03.155 *******
2026-02-15 06:14:53.011604 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011614 | orchestrator |
2026-02-15 06:14:53.011625 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-15 06:14:53.011636 | orchestrator | Sunday 15 February 2026 06:14:25 +0000 (0:00:00.805) 0:21:03.960 *******
2026-02-15 06:14:53.011646 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011657 | orchestrator |
2026-02-15 06:14:53.011668 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-15 06:14:53.011678 | orchestrator | Sunday 15 February 2026 06:14:26 +0000 (0:00:00.784) 0:21:04.745 *******
2026-02-15 06:14:53.011689 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011699 | orchestrator |
2026-02-15 06:14:53.011710 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-15 06:14:53.011721 | orchestrator | Sunday 15 February 2026 06:14:27 +0000 (0:00:00.804) 0:21:05.549 *******
2026-02-15 06:14:53.011731 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011766 | orchestrator |
2026-02-15 06:14:53.011779 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-15 06:14:53.011792 | orchestrator | Sunday 15 February 2026 06:14:28 +0000 (0:00:00.776) 0:21:06.326 *******
2026-02-15 06:14:53.011804 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011817 | orchestrator |
2026-02-15 06:14:53.011830 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-15 06:14:53.011843 | orchestrator | Sunday 15 February 2026 06:14:29 +0000 (0:00:00.808) 0:21:07.135 *******
2026-02-15 06:14:53.011855 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011868 | orchestrator |
2026-02-15 06:14:53.011881 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-15 06:14:53.011893 | orchestrator | Sunday 15 February 2026 06:14:29 +0000 (0:00:00.777) 0:21:07.913 *******
2026-02-15 06:14:53.011906 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011938 | orchestrator |
2026-02-15 06:14:53.011952 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-15 06:14:53.011965 | orchestrator | Sunday 15 February 2026 06:14:30 +0000 (0:00:00.848) 0:21:08.761 *******
2026-02-15 06:14:53.011977 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.011989 | orchestrator |
2026-02-15 06:14:53.012002 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-15 06:14:53.012015 | orchestrator | Sunday 15 February 2026 06:14:31 +0000 (0:00:00.775) 0:21:09.537 *******
2026-02-15 06:14:53.012027 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012039 | orchestrator |
2026-02-15 06:14:53.012053 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-15 06:14:53.012065 | orchestrator | Sunday 15 February 2026 06:14:32 +0000 (0:00:00.788) 0:21:10.326 *******
2026-02-15 06:14:53.012078 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012089 | orchestrator |
2026-02-15 06:14:53.012100 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-15 06:14:53.012115 | orchestrator | Sunday 15 February 2026 06:14:33 +0000 (0:00:00.824) 0:21:11.150 *******
2026-02-15 06:14:53.012133 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012152 | orchestrator |
2026-02-15 06:14:53.012172 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-15 06:14:53.012191 | orchestrator | Sunday 15 February 2026 06:14:33 +0000 (0:00:00.789) 0:21:11.939 *******
2026-02-15 06:14:53.012211 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012231 | orchestrator |
2026-02-15 06:14:53.012267 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-15 06:14:53.012286 | orchestrator | Sunday 15 February 2026 06:14:34 +0000 (0:00:00.780) 0:21:12.720 *******
2026-02-15 06:14:53.012306 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012326 | orchestrator |
2026-02-15 06:14:53.012346 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-15 06:14:53.012365 | orchestrator | Sunday 15 February 2026 06:14:35 +0000 (0:00:00.814) 0:21:13.534 *******
2026-02-15 06:14:53.012385 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012403 | orchestrator |
2026-02-15 06:14:53.012422 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 06:14:53.012434 | orchestrator | Sunday 15 February 2026 06:14:36 +0000 (0:00:00.781) 0:21:14.315 *******
2026-02-15 06:14:53.012445 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012456 | orchestrator |
2026-02-15 06:14:53.012488 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 06:14:53.012500 | orchestrator | Sunday 15 February 2026 06:14:37 +0000 (0:00:00.866) 0:21:15.181 *******
2026-02-15 06:14:53.012511 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012522 | orchestrator |
2026-02-15 06:14:53.012533 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 06:14:53.012543 | orchestrator | Sunday 15 February 2026 06:14:37 +0000 (0:00:00.816) 0:21:15.998 *******
2026-02-15 06:14:53.012566 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012577 | orchestrator |
2026-02-15 06:14:53.012588 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 06:14:53.012598 | orchestrator | Sunday 15 February 2026 06:14:38 +0000 (0:00:00.796) 0:21:16.794 *******
2026-02-15 06:14:53.012609 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012619 | orchestrator |
2026-02-15 06:14:53.012630 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-15 06:14:53.012641 | orchestrator | Sunday 15 February 2026 06:14:39 +0000 (0:00:00.802) 0:21:17.597 *******
2026-02-15 06:14:53.012652 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012663 | orchestrator |
2026-02-15 06:14:53.012673 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-15 06:14:53.012684 | orchestrator | Sunday 15 February 2026 06:14:40 +0000 (0:00:00.784) 0:21:18.382 *******
2026-02-15 06:14:53.012694 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012705 | orchestrator |
2026-02-15 06:14:53.012715 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-15 06:14:53.012726 | orchestrator | Sunday 15 February 2026 06:14:41 +0000 (0:00:00.791) 0:21:19.173 *******
2026-02-15 06:14:53.012737 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012747 | orchestrator |
2026-02-15 06:14:53.012758 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-15 06:14:53.012768 | orchestrator | Sunday 15 February 2026 06:14:41 +0000 (0:00:00.784) 0:21:19.957 *******
2026-02-15 06:14:53.012779 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012789 | orchestrator |
2026-02-15 06:14:53.012800 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-15 06:14:53.012811 | orchestrator | Sunday 15 February 2026 06:14:42 +0000 (0:00:00.798) 0:21:20.756 *******
2026-02-15 06:14:53.012821 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012832 | orchestrator |
2026-02-15 06:14:53.012842 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-15 06:14:53.012853 | orchestrator | Sunday 15 February 2026 06:14:43 +0000 (0:00:00.800) 0:21:21.557 *******
2026-02-15 06:14:53.012864 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012874 | orchestrator |
2026-02-15 06:14:53.012885 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-15 06:14:53.012896 | orchestrator | Sunday 15 February 2026 06:14:44 +0000 (0:00:00.802) 0:21:22.359 *******
2026-02-15 06:14:53.012906 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012953 | orchestrator |
2026-02-15 06:14:53.012964 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-15 06:14:53.012975 | orchestrator | Sunday 15 February 2026 06:14:45 +0000 (0:00:00.876) 0:21:23.236 *******
2026-02-15 06:14:53.012986 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.012996 | orchestrator |
2026-02-15 06:14:53.013007 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-15 06:14:53.013017 | orchestrator | Sunday 15 February 2026 06:14:45 +0000 (0:00:00.788) 0:21:24.025 *******
2026-02-15 06:14:53.013028 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.013038 | orchestrator |
2026-02-15 06:14:53.013049 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-15 06:14:53.013060 | orchestrator | Sunday 15 February 2026 06:14:46 +0000 (0:00:00.887) 0:21:24.912 *******
2026-02-15 06:14:53.013070 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.013081 | orchestrator |
2026-02-15 06:14:53.013091 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-15 06:14:53.013102 | orchestrator | Sunday 15 February 2026 06:14:47 +0000 (0:00:00.827) 0:21:25.740 *******
2026-02-15 06:14:53.013113 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.013124 | orchestrator |
2026-02-15 06:14:53.013134 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:14:53.013154 | orchestrator | Sunday 15 February 2026 06:14:48 +0000 (0:00:00.788) 0:21:26.529 *******
2026-02-15 06:14:53.013165 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.013175 | orchestrator |
2026-02-15 06:14:53.013186 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:14:53.013196 | orchestrator | Sunday 15 February 2026 06:14:49 +0000 (0:00:00.847) 0:21:27.376 *******
2026-02-15 06:14:53.013207 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.013218 | orchestrator |
2026-02-15 06:14:53.013228 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:14:53.013245 | orchestrator | Sunday 15 February 2026 06:14:50 +0000 (0:00:00.946) 0:21:28.323 *******
2026-02-15 06:14:53.013256 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.013267 | orchestrator |
2026-02-15 06:14:53.013277 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:14:53.013288 | orchestrator | Sunday 15 February 2026 06:14:51 +0000 (0:00:00.891) 0:21:29.214 *******
2026-02-15 06:14:53.013299 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:14:53.013309 | orchestrator |
2026-02-15 06:14:53.013319 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:14:53.013330 | orchestrator | Sunday 15 February 2026 06:14:51 +0000 (0:00:00.809) 0:21:30.024 *******
2026-02-15 06:14:53.013341 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-15 06:14:53.013352 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-15 06:14:53.013369 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-15 06:15:24.755480 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:15:24.755575 | orchestrator |
2026-02-15 06:15:24.755583 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:15:24.755588 | orchestrator | Sunday 15 February 2026 06:14:52 +0000 (0:00:01.071) 0:21:31.095 *******
2026-02-15 06:15:24.755593 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-15 06:15:24.755597 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-15 06:15:24.755601 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-15 06:15:24.755606 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:15:24.755610 | orchestrator |
2026-02-15 06:15:24.755614 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:15:24.755618 | orchestrator | Sunday 15 February 2026 06:14:54 +0000 (0:00:01.155) 0:21:32.251 *******
2026-02-15 06:15:24.755622 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-02-15 06:15:24.755626 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-02-15 06:15:24.755630 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-02-15 06:15:24.755633 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:15:24.755637 | orchestrator |
2026-02-15 06:15:24.755641 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:15:24.755645 | orchestrator | Sunday 15 February 2026 06:14:55 +0000 (0:00:01.107) 0:21:33.359 *******
2026-02-15 06:15:24.755648 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:15:24.755652 | orchestrator |
2026-02-15 06:15:24.755656 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:15:24.755660 | orchestrator | Sunday 15 February 2026 06:14:56 +0000 (0:00:00.824) 0:21:34.183 *******
2026-02-15 06:15:24.755664 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-02-15 06:15:24.755668 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:15:24.755672 | orchestrator |
2026-02-15 06:15:24.755675 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-15 06:15:24.755679 | orchestrator | Sunday 15 February 2026 06:14:56 +0000 (0:00:00.913) 0:21:35.097 *******
2026-02-15 06:15:24.755683 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:15:24.755687 | orchestrator |
2026-02-15 06:15:24.755690 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-15 06:15:24.755712 | orchestrator | Sunday 15 February 2026 06:14:57 +0000 (0:00:00.791) 0:21:35.888 *******
2026-02-15 06:15:24.755716 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-15 06:15:24.755719 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 06:15:24.755723 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-15 06:15:24.755727 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:15:24.755731 | orchestrator |
2026-02-15 06:15:24.755734 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-15 06:15:24.755738 | orchestrator | Sunday 15 February 2026 06:14:59 +0000 (0:00:01.429) 0:21:37.318 *******
2026-02-15 06:15:24.755742 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:15:24.755745 | orchestrator |
2026-02-15 06:15:24.755749 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-15 06:15:24.755753 | orchestrator | Sunday 15 February 2026 06:15:00 +0000 (0:00:00.786) 0:21:38.105 *******
2026-02-15 06:15:24.755757 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:15:24.755760 | orchestrator |
2026-02-15 06:15:24.755764 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-15 06:15:24.755768 | orchestrator | Sunday 15 February 2026 06:15:00 +0000 (0:00:00.832) 0:21:38.938 *******
2026-02-15 06:15:24.755771 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:15:24.755775 | orchestrator |
2026-02-15 06:15:24.755779 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] **************************************
2026-02-15 06:15:24.755782 | orchestrator | Sunday 15 February 2026 06:15:01 +0000 (0:00:00.782) 0:21:39.721 *******
2026-02-15 06:15:24.755786 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:15:24.755790 | orchestrator |
2026-02-15 06:15:24.755794 | orchestrator | PLAY [Upgrade ceph mgr nodes when implicitly collocated on monitors] ***********
2026-02-15 06:15:24.755797 | orchestrator |
2026-02-15 06:15:24.755801 | orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-15 06:15:24.755805 | orchestrator | Sunday 15 February 2026 06:15:02 +0000 (0:00:01.034) 0:21:40.755 *******
2026-02-15 06:15:24.755808 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.755812 | orchestrator |
2026-02-15 06:15:24.755816 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-15 06:15:24.755820 | orchestrator | Sunday 15 February 2026 06:15:03 +0000 (0:00:00.858) 0:21:41.613 *******
2026-02-15 06:15:24.755872 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.755877 | orchestrator |
2026-02-15 06:15:24.755881 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 06:15:24.755884 | orchestrator | Sunday 15 February 2026 06:15:04 +0000 (0:00:00.805) 0:21:42.419 *******
2026-02-15 06:15:24.755888 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.755892 | orchestrator |
2026-02-15 06:15:24.755895 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 06:15:24.755910 | orchestrator | Sunday 15 February 2026 06:15:05 +0000 (0:00:00.792) 0:21:43.211 *******
2026-02-15 06:15:24.755914 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.755917 | orchestrator |
2026-02-15 06:15:24.755921 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 06:15:24.755925 | orchestrator | Sunday 15 February 2026 06:15:05 +0000 (0:00:00.822) 0:21:44.033 *******
2026-02-15 06:15:24.755929 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.755932 | orchestrator |
2026-02-15 06:15:24.755936 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 06:15:24.755940 | orchestrator | Sunday 15 February 2026 06:15:06 +0000 (0:00:00.776) 0:21:44.810 *******
2026-02-15 06:15:24.755943 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.755947 | orchestrator |
2026-02-15 06:15:24.755951 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 06:15:24.755965 | orchestrator | Sunday 15 February 2026 06:15:07 +0000 (0:00:00.836) 0:21:45.646 *******
2026-02-15 06:15:24.755974 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.755977 | orchestrator |
2026-02-15 06:15:24.755981 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 06:15:24.755985 | orchestrator | Sunday 15 February 2026 06:15:08 +0000 (0:00:00.838) 0:21:46.484 *******
2026-02-15 06:15:24.755988 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.755992 | orchestrator |
2026-02-15 06:15:24.755996 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 06:15:24.755999 | orchestrator | Sunday 15 February 2026 06:15:09 +0000 (0:00:00.852) 0:21:47.336 *******
2026-02-15 06:15:24.756003 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.756007 | orchestrator |
2026-02-15 06:15:24.756010 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 06:15:24.756014 | orchestrator | Sunday 15 February 2026 06:15:10 +0000 (0:00:00.860) 0:21:48.197 *******
2026-02-15 06:15:24.756018 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.756022 | orchestrator |
2026-02-15 06:15:24.756025 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 06:15:24.756029 | orchestrator | Sunday 15 February 2026 06:15:10 +0000 (0:00:00.824) 0:21:49.022 *******
2026-02-15 06:15:24.756033 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.756036 | orchestrator |
2026-02-15 06:15:24.756040 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 06:15:24.756044 | orchestrator | Sunday 15 February 2026 06:15:11 +0000 (0:00:00.797) 0:21:49.820 *******
2026-02-15 06:15:24.756049 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.756053 | orchestrator |
2026-02-15 06:15:24.756057 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-15 06:15:24.756061 | orchestrator | Sunday 15 February 2026 06:15:12 +0000 (0:00:00.775) 0:21:50.595 *******
2026-02-15 06:15:24.756065 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:15:24.756070 | orchestrator |
2026-02-15 06:15:24.756074 |
orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-15 06:15:24.756078 | orchestrator | Sunday 15 February 2026 06:15:13 +0000 (0:00:00.828) 0:21:51.425 ******* 2026-02-15 06:15:24.756082 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756086 | orchestrator | 2026-02-15 06:15:24.756091 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-15 06:15:24.756095 | orchestrator | Sunday 15 February 2026 06:15:14 +0000 (0:00:00.786) 0:21:52.211 ******* 2026-02-15 06:15:24.756099 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756103 | orchestrator | 2026-02-15 06:15:24.756108 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-15 06:15:24.756112 | orchestrator | Sunday 15 February 2026 06:15:14 +0000 (0:00:00.814) 0:21:53.026 ******* 2026-02-15 06:15:24.756116 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756121 | orchestrator | 2026-02-15 06:15:24.756125 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-15 06:15:24.756129 | orchestrator | Sunday 15 February 2026 06:15:15 +0000 (0:00:00.809) 0:21:53.836 ******* 2026-02-15 06:15:24.756133 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756136 | orchestrator | 2026-02-15 06:15:24.756140 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-15 06:15:24.756144 | orchestrator | Sunday 15 February 2026 06:15:16 +0000 (0:00:00.780) 0:21:54.617 ******* 2026-02-15 06:15:24.756147 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756151 | orchestrator | 2026-02-15 06:15:24.756155 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-15 06:15:24.756158 | orchestrator | Sunday 15 February 2026 06:15:17 +0000 (0:00:00.793) 0:21:55.410 ******* 
2026-02-15 06:15:24.756162 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756166 | orchestrator | 2026-02-15 06:15:24.756169 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-15 06:15:24.756173 | orchestrator | Sunday 15 February 2026 06:15:18 +0000 (0:00:00.799) 0:21:56.209 ******* 2026-02-15 06:15:24.756182 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756186 | orchestrator | 2026-02-15 06:15:24.756190 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-15 06:15:24.756193 | orchestrator | Sunday 15 February 2026 06:15:18 +0000 (0:00:00.776) 0:21:56.986 ******* 2026-02-15 06:15:24.756197 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756201 | orchestrator | 2026-02-15 06:15:24.756204 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-15 06:15:24.756208 | orchestrator | Sunday 15 February 2026 06:15:19 +0000 (0:00:00.826) 0:21:57.812 ******* 2026-02-15 06:15:24.756212 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756215 | orchestrator | 2026-02-15 06:15:24.756219 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-15 06:15:24.756223 | orchestrator | Sunday 15 February 2026 06:15:20 +0000 (0:00:00.811) 0:21:58.624 ******* 2026-02-15 06:15:24.756226 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756230 | orchestrator | 2026-02-15 06:15:24.756234 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-15 06:15:24.756237 | orchestrator | Sunday 15 February 2026 06:15:21 +0000 (0:00:00.780) 0:21:59.405 ******* 2026-02-15 06:15:24.756244 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756247 | orchestrator | 2026-02-15 06:15:24.756251 | orchestrator | TASK [ceph-container-common : Generate 
systemd ceph target file] *************** 2026-02-15 06:15:24.756255 | orchestrator | Sunday 15 February 2026 06:15:22 +0000 (0:00:00.891) 0:22:00.297 ******* 2026-02-15 06:15:24.756258 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756262 | orchestrator | 2026-02-15 06:15:24.756266 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-15 06:15:24.756269 | orchestrator | Sunday 15 February 2026 06:15:23 +0000 (0:00:00.895) 0:22:01.192 ******* 2026-02-15 06:15:24.756273 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:24.756277 | orchestrator | 2026-02-15 06:15:24.756280 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-15 06:15:24.756284 | orchestrator | Sunday 15 February 2026 06:15:23 +0000 (0:00:00.810) 0:22:02.003 ******* 2026-02-15 06:15:24.756292 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137247 | orchestrator | 2026-02-15 06:15:56.137374 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-15 06:15:56.137394 | orchestrator | Sunday 15 February 2026 06:15:24 +0000 (0:00:00.845) 0:22:02.848 ******* 2026-02-15 06:15:56.137406 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137419 | orchestrator | 2026-02-15 06:15:56.137430 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-15 06:15:56.137442 | orchestrator | Sunday 15 February 2026 06:15:25 +0000 (0:00:00.831) 0:22:03.680 ******* 2026-02-15 06:15:56.137453 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137464 | orchestrator | 2026-02-15 06:15:56.137475 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-15 06:15:56.137485 | orchestrator | Sunday 15 February 2026 06:15:26 +0000 (0:00:00.774) 0:22:04.454 ******* 2026-02-15 06:15:56.137496 | orchestrator | skipping: 
[testbed-node-2] 2026-02-15 06:15:56.137507 | orchestrator | 2026-02-15 06:15:56.137517 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-15 06:15:56.137529 | orchestrator | Sunday 15 February 2026 06:15:27 +0000 (0:00:00.784) 0:22:05.238 ******* 2026-02-15 06:15:56.137540 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137551 | orchestrator | 2026-02-15 06:15:56.137562 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-15 06:15:56.137573 | orchestrator | Sunday 15 February 2026 06:15:27 +0000 (0:00:00.778) 0:22:06.017 ******* 2026-02-15 06:15:56.137583 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137594 | orchestrator | 2026-02-15 06:15:56.137608 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-15 06:15:56.137620 | orchestrator | Sunday 15 February 2026 06:15:28 +0000 (0:00:00.911) 0:22:06.929 ******* 2026-02-15 06:15:56.137659 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137671 | orchestrator | 2026-02-15 06:15:56.137684 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-15 06:15:56.137697 | orchestrator | Sunday 15 February 2026 06:15:29 +0000 (0:00:00.799) 0:22:07.728 ******* 2026-02-15 06:15:56.137709 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137721 | orchestrator | 2026-02-15 06:15:56.137734 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-15 06:15:56.137776 | orchestrator | Sunday 15 February 2026 06:15:30 +0000 (0:00:00.787) 0:22:08.516 ******* 2026-02-15 06:15:56.137795 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137807 | orchestrator | 2026-02-15 06:15:56.137819 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-15 06:15:56.137831 
| orchestrator | Sunday 15 February 2026 06:15:31 +0000 (0:00:00.776) 0:22:09.292 ******* 2026-02-15 06:15:56.137843 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137855 | orchestrator | 2026-02-15 06:15:56.137867 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-15 06:15:56.137879 | orchestrator | Sunday 15 February 2026 06:15:31 +0000 (0:00:00.787) 0:22:10.080 ******* 2026-02-15 06:15:56.137891 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137904 | orchestrator | 2026-02-15 06:15:56.137915 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-15 06:15:56.137928 | orchestrator | Sunday 15 February 2026 06:15:32 +0000 (0:00:00.790) 0:22:10.871 ******* 2026-02-15 06:15:56.137940 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137953 | orchestrator | 2026-02-15 06:15:56.137963 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-15 06:15:56.137974 | orchestrator | Sunday 15 February 2026 06:15:33 +0000 (0:00:00.783) 0:22:11.655 ******* 2026-02-15 06:15:56.137985 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.137996 | orchestrator | 2026-02-15 06:15:56.138006 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-15 06:15:56.138080 | orchestrator | Sunday 15 February 2026 06:15:34 +0000 (0:00:00.802) 0:22:12.457 ******* 2026-02-15 06:15:56.138093 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138104 | orchestrator | 2026-02-15 06:15:56.138115 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-15 06:15:56.138126 | orchestrator | Sunday 15 February 2026 06:15:35 +0000 (0:00:00.811) 0:22:13.268 ******* 2026-02-15 06:15:56.138137 | orchestrator | skipping: [testbed-node-2] 
2026-02-15 06:15:56.138149 | orchestrator | 2026-02-15 06:15:56.138168 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-15 06:15:56.138186 | orchestrator | Sunday 15 February 2026 06:15:35 +0000 (0:00:00.801) 0:22:14.070 ******* 2026-02-15 06:15:56.138205 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138223 | orchestrator | 2026-02-15 06:15:56.138241 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-15 06:15:56.138257 | orchestrator | Sunday 15 February 2026 06:15:36 +0000 (0:00:00.806) 0:22:14.876 ******* 2026-02-15 06:15:56.138274 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138292 | orchestrator | 2026-02-15 06:15:56.138308 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-15 06:15:56.138345 | orchestrator | Sunday 15 February 2026 06:15:37 +0000 (0:00:00.788) 0:22:15.665 ******* 2026-02-15 06:15:56.138367 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138385 | orchestrator | 2026-02-15 06:15:56.138404 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-15 06:15:56.138419 | orchestrator | Sunday 15 February 2026 06:15:38 +0000 (0:00:00.818) 0:22:16.484 ******* 2026-02-15 06:15:56.138429 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138449 | orchestrator | 2026-02-15 06:15:56.138467 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-15 06:15:56.138499 | orchestrator | Sunday 15 February 2026 06:15:39 +0000 (0:00:00.804) 0:22:17.289 ******* 2026-02-15 06:15:56.138518 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138537 | orchestrator | 2026-02-15 06:15:56.138555 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 
2026-02-15 06:15:56.138574 | orchestrator | Sunday 15 February 2026 06:15:40 +0000 (0:00:00.907) 0:22:18.197 ******* 2026-02-15 06:15:56.138617 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138629 | orchestrator | 2026-02-15 06:15:56.138640 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-15 06:15:56.138651 | orchestrator | Sunday 15 February 2026 06:15:40 +0000 (0:00:00.836) 0:22:19.033 ******* 2026-02-15 06:15:56.138662 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138672 | orchestrator | 2026-02-15 06:15:56.138683 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-15 06:15:56.138694 | orchestrator | Sunday 15 February 2026 06:15:41 +0000 (0:00:00.882) 0:22:19.915 ******* 2026-02-15 06:15:56.138704 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138715 | orchestrator | 2026-02-15 06:15:56.138726 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-15 06:15:56.138736 | orchestrator | Sunday 15 February 2026 06:15:42 +0000 (0:00:00.781) 0:22:20.697 ******* 2026-02-15 06:15:56.138782 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138794 | orchestrator | 2026-02-15 06:15:56.138805 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-15 06:15:56.138817 | orchestrator | Sunday 15 February 2026 06:15:43 +0000 (0:00:00.809) 0:22:21.507 ******* 2026-02-15 06:15:56.138828 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138839 | orchestrator | 2026-02-15 06:15:56.138849 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-15 06:15:56.138861 | orchestrator | Sunday 15 February 2026 06:15:44 +0000 (0:00:00.796) 0:22:22.303 ******* 2026-02-15 06:15:56.138871 | orchestrator 
| skipping: [testbed-node-2] 2026-02-15 06:15:56.138882 | orchestrator | 2026-02-15 06:15:56.138893 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-15 06:15:56.138904 | orchestrator | Sunday 15 February 2026 06:15:44 +0000 (0:00:00.777) 0:22:23.081 ******* 2026-02-15 06:15:56.138915 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138925 | orchestrator | 2026-02-15 06:15:56.138936 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-15 06:15:56.138947 | orchestrator | Sunday 15 February 2026 06:15:45 +0000 (0:00:00.820) 0:22:23.901 ******* 2026-02-15 06:15:56.138957 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.138968 | orchestrator | 2026-02-15 06:15:56.138979 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-15 06:15:56.138990 | orchestrator | Sunday 15 February 2026 06:15:46 +0000 (0:00:00.797) 0:22:24.698 ******* 2026-02-15 06:15:56.139001 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-15 06:15:56.139012 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-15 06:15:56.139022 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-15 06:15:56.139033 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.139044 | orchestrator | 2026-02-15 06:15:56.139055 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 06:15:56.139066 | orchestrator | Sunday 15 February 2026 06:15:47 +0000 (0:00:01.120) 0:22:25.819 ******* 2026-02-15 06:15:56.139076 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-15 06:15:56.139087 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-15 06:15:56.139098 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-15 06:15:56.139108 | 
orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.139119 | orchestrator | 2026-02-15 06:15:56.139130 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 06:15:56.139150 | orchestrator | Sunday 15 February 2026 06:15:48 +0000 (0:00:01.034) 0:22:26.853 ******* 2026-02-15 06:15:56.139161 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2026-02-15 06:15:56.139172 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2026-02-15 06:15:56.139183 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2026-02-15 06:15:56.139193 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.139204 | orchestrator | 2026-02-15 06:15:56.139215 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 06:15:56.139226 | orchestrator | Sunday 15 February 2026 06:15:50 +0000 (0:00:01.512) 0:22:28.366 ******* 2026-02-15 06:15:56.139236 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.139247 | orchestrator | 2026-02-15 06:15:56.139258 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 06:15:56.139268 | orchestrator | Sunday 15 February 2026 06:15:51 +0000 (0:00:00.833) 0:22:29.199 ******* 2026-02-15 06:15:56.139280 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-02-15 06:15:56.139291 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.139302 | orchestrator | 2026-02-15 06:15:56.139312 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-15 06:15:56.139323 | orchestrator | Sunday 15 February 2026 06:15:52 +0000 (0:00:01.411) 0:22:30.611 ******* 2026-02-15 06:15:56.139334 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.139345 | orchestrator | 2026-02-15 06:15:56.139355 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] 
********************************** 2026-02-15 06:15:56.139373 | orchestrator | Sunday 15 February 2026 06:15:53 +0000 (0:00:00.842) 0:22:31.454 ******* 2026-02-15 06:15:56.139384 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-15 06:15:56.139395 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-15 06:15:56.139405 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-15 06:15:56.139416 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.139427 | orchestrator | 2026-02-15 06:15:56.139438 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-15 06:15:56.139449 | orchestrator | Sunday 15 February 2026 06:15:54 +0000 (0:00:01.160) 0:22:32.614 ******* 2026-02-15 06:15:56.139459 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:15:56.139470 | orchestrator | 2026-02-15 06:15:56.139481 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-15 06:15:56.139492 | orchestrator | Sunday 15 February 2026 06:15:55 +0000 (0:00:00.816) 0:22:33.431 ******* 2026-02-15 06:15:56.139509 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:16:37.741637 | orchestrator | 2026-02-15 06:16:37.741796 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-15 06:16:37.741813 | orchestrator | Sunday 15 February 2026 06:15:56 +0000 (0:00:00.795) 0:22:34.226 ******* 2026-02-15 06:16:37.741825 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:16:37.741838 | orchestrator | 2026-02-15 06:16:37.741849 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-15 06:16:37.741860 | orchestrator | Sunday 15 February 2026 06:15:56 +0000 (0:00:00.774) 0:22:35.001 ******* 2026-02-15 06:16:37.741871 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:16:37.741882 | orchestrator | 2026-02-15 
06:16:37.741893 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-15 06:16:37.741904 | orchestrator | 2026-02-15 06:16:37.741915 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-15 06:16:37.741926 | orchestrator | Sunday 15 February 2026 06:15:58 +0000 (0:00:01.393) 0:22:36.394 ******* 2026-02-15 06:16:37.741936 | orchestrator | changed: [testbed-node-0] 2026-02-15 06:16:37.741947 | orchestrator | 2026-02-15 06:16:37.741958 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-15 06:16:37.741969 | orchestrator | Sunday 15 February 2026 06:16:11 +0000 (0:00:13.144) 0:22:49.539 ******* 2026-02-15 06:16:37.742003 | orchestrator | changed: [testbed-node-0] 2026-02-15 06:16:37.742014 | orchestrator | 2026-02-15 06:16:37.742093 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 06:16:37.742103 | orchestrator | Sunday 15 February 2026 06:16:13 +0000 (0:00:02.453) 0:22:51.992 ******* 2026-02-15 06:16:37.742114 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2026-02-15 06:16:37.742124 | orchestrator | 2026-02-15 06:16:37.742135 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 06:16:37.742145 | orchestrator | Sunday 15 February 2026 06:16:15 +0000 (0:00:01.295) 0:22:53.288 ******* 2026-02-15 06:16:37.742157 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:37.742168 | orchestrator | 2026-02-15 06:16:37.742178 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 06:16:37.742189 | orchestrator | Sunday 15 February 2026 06:16:16 +0000 (0:00:01.478) 0:22:54.766 ******* 2026-02-15 06:16:37.742199 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:37.742210 | orchestrator | 2026-02-15 06:16:37.742220 
| orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 06:16:37.742231 | orchestrator | Sunday 15 February 2026 06:16:17 +0000 (0:00:01.152) 0:22:55.919 ******* 2026-02-15 06:16:37.742241 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:37.742252 | orchestrator | 2026-02-15 06:16:37.742262 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 06:16:37.742273 | orchestrator | Sunday 15 February 2026 06:16:19 +0000 (0:00:01.495) 0:22:57.416 ******* 2026-02-15 06:16:37.742283 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:37.742294 | orchestrator | 2026-02-15 06:16:37.742304 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 06:16:37.742315 | orchestrator | Sunday 15 February 2026 06:16:20 +0000 (0:00:01.240) 0:22:58.656 ******* 2026-02-15 06:16:37.742325 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:37.742336 | orchestrator | 2026-02-15 06:16:37.742346 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 06:16:37.742357 | orchestrator | Sunday 15 February 2026 06:16:21 +0000 (0:00:01.144) 0:22:59.801 ******* 2026-02-15 06:16:37.742367 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:37.742377 | orchestrator | 2026-02-15 06:16:37.742388 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 06:16:37.742399 | orchestrator | Sunday 15 February 2026 06:16:22 +0000 (0:00:01.190) 0:23:00.992 ******* 2026-02-15 06:16:37.742410 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:37.742420 | orchestrator | 2026-02-15 06:16:37.742431 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 06:16:37.742448 | orchestrator | Sunday 15 February 2026 06:16:24 +0000 (0:00:01.192) 0:23:02.185 ******* 2026-02-15 
06:16:37.742467 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:37.742485 | orchestrator | 2026-02-15 06:16:37.742503 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 06:16:37.742522 | orchestrator | Sunday 15 February 2026 06:16:25 +0000 (0:00:01.206) 0:23:03.392 ******* 2026-02-15 06:16:37.742541 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 06:16:37.742561 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:16:37.742573 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:16:37.742584 | orchestrator | 2026-02-15 06:16:37.742594 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 06:16:37.742605 | orchestrator | Sunday 15 February 2026 06:16:27 +0000 (0:00:02.023) 0:23:05.415 ******* 2026-02-15 06:16:37.742615 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:37.742626 | orchestrator | 2026-02-15 06:16:37.742636 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 06:16:37.742692 | orchestrator | Sunday 15 February 2026 06:16:28 +0000 (0:00:01.293) 0:23:06.709 ******* 2026-02-15 06:16:37.742728 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 06:16:37.742746 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:16:37.742757 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:16:37.742768 | orchestrator | 2026-02-15 06:16:37.742778 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 06:16:37.742789 | orchestrator | Sunday 15 February 2026 06:16:31 +0000 (0:00:03.274) 0:23:09.983 ******* 2026-02-15 06:16:37.742799 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-0)  2026-02-15 06:16:37.742810 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-15 06:16:37.742820 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-15 06:16:37.742831 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:37.742842 | orchestrator | 2026-02-15 06:16:37.742873 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 06:16:37.742884 | orchestrator | Sunday 15 February 2026 06:16:33 +0000 (0:00:01.768) 0:23:11.752 ******* 2026-02-15 06:16:37.742897 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 06:16:37.742911 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 06:16:37.742922 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 06:16:37.742933 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:37.742944 | orchestrator | 2026-02-15 06:16:37.742955 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 06:16:37.742966 | orchestrator | Sunday 15 February 2026 06:16:35 +0000 (0:00:01.662) 0:23:13.415 ******* 2026-02-15 06:16:37.742979 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:16:37.742993 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:16:37.743005 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:16:37.743015 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:37.743026 | orchestrator | 2026-02-15 06:16:37.743037 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 06:16:37.743047 | orchestrator | Sunday 15 February 2026 06:16:36 +0000 (0:00:01.168) 0:23:14.584 ******* 2026-02-15 06:16:37.743061 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:16:29.488408', 'end': '2026-02-15 06:16:29.543543', 'delta': '0:00:00.055135', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 
'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 06:16:37.743087 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:16:30.081271', 'end': '2026-02-15 06:16:30.130108', 'delta': '0:00:00.048837', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 06:16:37.743108 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:16:30.657095', 'end': '2026-02-15 06:16:30.708129', 'delta': '0:00:00.051034', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 06:16:56.731214 | orchestrator | 2026-02-15 06:16:56.731305 | orchestrator | TASK [ceph-facts : 
Set_fact _container_exec_cmd] ******************************* 2026-02-15 06:16:56.731315 | orchestrator | Sunday 15 February 2026 06:16:37 +0000 (0:00:01.248) 0:23:15.832 ******* 2026-02-15 06:16:56.731321 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:56.731328 | orchestrator | 2026-02-15 06:16:56.731334 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 06:16:56.731340 | orchestrator | Sunday 15 February 2026 06:16:38 +0000 (0:00:01.258) 0:23:17.091 ******* 2026-02-15 06:16:56.731346 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:56.731353 | orchestrator | 2026-02-15 06:16:56.731359 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-15 06:16:56.731365 | orchestrator | Sunday 15 February 2026 06:16:40 +0000 (0:00:01.259) 0:23:18.350 ******* 2026-02-15 06:16:56.731371 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:56.731377 | orchestrator | 2026-02-15 06:16:56.731383 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-15 06:16:56.731389 | orchestrator | Sunday 15 February 2026 06:16:41 +0000 (0:00:01.183) 0:23:19.533 ******* 2026-02-15 06:16:56.731395 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:56.731400 | orchestrator | 2026-02-15 06:16:56.731406 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:16:56.731412 | orchestrator | Sunday 15 February 2026 06:16:43 +0000 (0:00:02.131) 0:23:21.665 ******* 2026-02-15 06:16:56.731417 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:16:56.731423 | orchestrator | 2026-02-15 06:16:56.731429 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-15 06:16:56.731434 | orchestrator | Sunday 15 February 2026 06:16:44 +0000 (0:00:01.168) 0:23:22.834 ******* 2026-02-15 06:16:56.731440 | orchestrator | skipping: 
[testbed-node-0] 2026-02-15 06:16:56.731464 | orchestrator | 2026-02-15 06:16:56.731470 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-15 06:16:56.731476 | orchestrator | Sunday 15 February 2026 06:16:45 +0000 (0:00:01.207) 0:23:24.041 ******* 2026-02-15 06:16:56.731481 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:56.731487 | orchestrator | 2026-02-15 06:16:56.731493 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:16:56.731498 | orchestrator | Sunday 15 February 2026 06:16:47 +0000 (0:00:01.220) 0:23:25.262 ******* 2026-02-15 06:16:56.731504 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:56.731509 | orchestrator | 2026-02-15 06:16:56.731515 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-15 06:16:56.731521 | orchestrator | Sunday 15 February 2026 06:16:48 +0000 (0:00:01.210) 0:23:26.472 ******* 2026-02-15 06:16:56.731526 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:56.731532 | orchestrator | 2026-02-15 06:16:56.731538 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-15 06:16:56.731543 | orchestrator | Sunday 15 February 2026 06:16:49 +0000 (0:00:01.131) 0:23:27.604 ******* 2026-02-15 06:16:56.731549 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:56.731555 | orchestrator | 2026-02-15 06:16:56.731560 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-15 06:16:56.731566 | orchestrator | Sunday 15 February 2026 06:16:50 +0000 (0:00:01.270) 0:23:28.874 ******* 2026-02-15 06:16:56.731571 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:56.731577 | orchestrator | 2026-02-15 06:16:56.731583 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-15 06:16:56.731588 | 
orchestrator | Sunday 15 February 2026 06:16:51 +0000 (0:00:01.169) 0:23:30.043 ******* 2026-02-15 06:16:56.731594 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:56.731647 | orchestrator | 2026-02-15 06:16:56.731659 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-15 06:16:56.731670 | orchestrator | Sunday 15 February 2026 06:16:53 +0000 (0:00:01.159) 0:23:31.203 ******* 2026-02-15 06:16:56.731678 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:56.731684 | orchestrator | 2026-02-15 06:16:56.731690 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-15 06:16:56.731708 | orchestrator | Sunday 15 February 2026 06:16:54 +0000 (0:00:01.138) 0:23:32.342 ******* 2026-02-15 06:16:56.731714 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:56.731720 | orchestrator | 2026-02-15 06:16:56.731725 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-15 06:16:56.731731 | orchestrator | Sunday 15 February 2026 06:16:55 +0000 (0:00:01.145) 0:23:33.488 ******* 2026-02-15 06:16:56.731738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:16:56.731747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': 
'0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:16:56.731766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:16:56.731782 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 06:16:56.731791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:16:56.731798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': 
'512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:16:56.731805 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:16:56.731823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37951a5f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 
'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 06:16:57.971100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:16:57.971227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:16:57.971253 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:16:57.971268 | orchestrator | 2026-02-15 06:16:57.971281 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 06:16:57.971293 | orchestrator | Sunday 15 February 2026 06:16:56 +0000 (0:00:01.327) 0:23:34.815 ******* 2026-02-15 06:16:57.971306 | 
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:16:57.971320 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:16:57.971350 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:16:57.971363 | orchestrator | skipping: 
[testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:16:57.971396 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:16:57.971427 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 
'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:16:57.971438 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:16:57.971459 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37951a5f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': 
'2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:16:57.971489 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:17:38.617411 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:17:38.617609 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:17:38.617631 | orchestrator | 2026-02-15 06:17:38.617644 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-15 06:17:38.617656 | orchestrator | Sunday 15 February 2026 06:16:57 +0000 (0:00:01.245) 0:23:36.061 ******* 2026-02-15 06:17:38.617667 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:17:38.617679 | orchestrator | 2026-02-15 06:17:38.617690 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-15 06:17:38.617701 | orchestrator 
| Sunday 15 February 2026 06:16:59 +0000 (0:00:01.550) 0:23:37.611 ******* 2026-02-15 06:17:38.617711 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:17:38.617722 | orchestrator | 2026-02-15 06:17:38.617733 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:17:38.617744 | orchestrator | Sunday 15 February 2026 06:17:00 +0000 (0:00:01.206) 0:23:38.817 ******* 2026-02-15 06:17:38.617754 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:17:38.617766 | orchestrator | 2026-02-15 06:17:38.617777 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:17:38.617788 | orchestrator | Sunday 15 February 2026 06:17:02 +0000 (0:00:01.489) 0:23:40.307 ******* 2026-02-15 06:17:38.617799 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:17:38.617810 | orchestrator | 2026-02-15 06:17:38.617821 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:17:38.617831 | orchestrator | Sunday 15 February 2026 06:17:03 +0000 (0:00:01.168) 0:23:41.475 ******* 2026-02-15 06:17:38.617842 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:17:38.617853 | orchestrator | 2026-02-15 06:17:38.617863 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:17:38.617874 | orchestrator | Sunday 15 February 2026 06:17:04 +0000 (0:00:01.251) 0:23:42.727 ******* 2026-02-15 06:17:38.617885 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:17:38.617896 | orchestrator | 2026-02-15 06:17:38.617907 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:17:38.617918 | orchestrator | Sunday 15 February 2026 06:17:05 +0000 (0:00:01.160) 0:23:43.888 ******* 2026-02-15 06:17:38.617928 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 06:17:38.617940 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-1) 2026-02-15 06:17:38.617953 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-15 06:17:38.617965 | orchestrator | 2026-02-15 06:17:38.617977 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:17:38.618078 | orchestrator | Sunday 15 February 2026 06:17:07 +0000 (0:00:02.072) 0:23:45.961 ******* 2026-02-15 06:17:38.618109 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-15 06:17:38.618122 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-15 06:17:38.618144 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-15 06:17:38.618157 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:17:38.618169 | orchestrator | 2026-02-15 06:17:38.618181 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-15 06:17:38.618194 | orchestrator | Sunday 15 February 2026 06:17:09 +0000 (0:00:01.191) 0:23:47.152 ******* 2026-02-15 06:17:38.618206 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:17:38.618218 | orchestrator | 2026-02-15 06:17:38.618230 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-15 06:17:38.618243 | orchestrator | Sunday 15 February 2026 06:17:10 +0000 (0:00:01.123) 0:23:48.276 ******* 2026-02-15 06:17:38.618255 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 06:17:38.618268 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:17:38.618282 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:17:38.618293 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:17:38.618306 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-02-15 06:17:38.618318 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:17:38.618329 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:17:38.618340 | orchestrator | 2026-02-15 06:17:38.618350 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-15 06:17:38.618361 | orchestrator | Sunday 15 February 2026 06:17:12 +0000 (0:00:01.938) 0:23:50.215 ******* 2026-02-15 06:17:38.618371 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 06:17:38.618382 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:17:38.618393 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:17:38.618403 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:17:38.618433 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:17:38.618445 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:17:38.618455 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:17:38.618466 | orchestrator | 2026-02-15 06:17:38.618477 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-15 06:17:38.618487 | orchestrator | Sunday 15 February 2026 06:17:14 +0000 (0:00:02.675) 0:23:52.891 ******* 2026-02-15 06:17:38.618498 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0 2026-02-15 06:17:38.618529 | orchestrator | 2026-02-15 06:17:38.618541 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-15 06:17:38.618552 
| orchestrator | Sunday 15 February 2026 06:17:15 +0000 (0:00:01.132) 0:23:54.023 *******
2026-02-15 06:17:38.618562 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0
2026-02-15 06:17:38.618573 | orchestrator |
2026-02-15 06:17:38.618583 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 06:17:38.618594 | orchestrator | Sunday 15 February 2026 06:17:17 +0000 (0:00:01.144) 0:23:55.168 *******
2026-02-15 06:17:38.618605 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:17:38.618615 | orchestrator |
2026-02-15 06:17:38.618626 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 06:17:38.618636 | orchestrator | Sunday 15 February 2026 06:17:18 +0000 (0:00:01.581) 0:23:56.750 *******
2026-02-15 06:17:38.618657 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:17:38.618668 | orchestrator |
2026-02-15 06:17:38.618679 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 06:17:38.618689 | orchestrator | Sunday 15 February 2026 06:17:19 +0000 (0:00:01.138) 0:23:57.889 *******
2026-02-15 06:17:38.618700 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:17:38.618710 | orchestrator |
2026-02-15 06:17:38.618721 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 06:17:38.618731 | orchestrator | Sunday 15 February 2026 06:17:21 +0000 (0:00:01.252) 0:23:59.141 *******
2026-02-15 06:17:38.618742 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:17:38.618752 | orchestrator |
2026-02-15 06:17:38.618763 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 06:17:38.618773 | orchestrator | Sunday 15 February 2026 06:17:22 +0000 (0:00:01.166) 0:24:00.308 *******
2026-02-15 06:17:38.618784 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:17:38.618795 | orchestrator |
2026-02-15 06:17:38.618805 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 06:17:38.618816 | orchestrator | Sunday 15 February 2026 06:17:23 +0000 (0:00:01.558) 0:24:01.866 *******
2026-02-15 06:17:38.618826 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:17:38.618837 | orchestrator |
2026-02-15 06:17:38.618847 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 06:17:38.618858 | orchestrator | Sunday 15 February 2026 06:17:24 +0000 (0:00:01.197) 0:24:03.064 *******
2026-02-15 06:17:38.618869 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:17:38.618880 | orchestrator |
2026-02-15 06:17:38.618890 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 06:17:38.618901 | orchestrator | Sunday 15 February 2026 06:17:26 +0000 (0:00:01.143) 0:24:04.207 *******
2026-02-15 06:17:38.618912 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:17:38.618922 | orchestrator |
2026-02-15 06:17:38.618933 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 06:17:38.618949 | orchestrator | Sunday 15 February 2026 06:17:27 +0000 (0:00:01.600) 0:24:05.808 *******
2026-02-15 06:17:38.618960 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:17:38.618971 | orchestrator |
2026-02-15 06:17:38.618982 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 06:17:38.618992 | orchestrator | Sunday 15 February 2026 06:17:29 +0000 (0:00:01.207) 0:24:07.449 *******
2026-02-15 06:17:38.619003 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:17:38.619014 | orchestrator |
2026-02-15 06:17:38.619024 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 06:17:38.619035 | orchestrator | Sunday 15 February 2026 06:17:30 +0000 (0:00:01.207) 0:24:08.656 *******
2026-02-15 06:17:38.619045 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:17:38.619056 | orchestrator |
2026-02-15 06:17:38.619067 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 06:17:38.619077 | orchestrator | Sunday 15 February 2026 06:17:31 +0000 (0:00:01.155) 0:24:09.812 *******
2026-02-15 06:17:38.619088 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:17:38.619098 | orchestrator |
2026-02-15 06:17:38.619109 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 06:17:38.619120 | orchestrator | Sunday 15 February 2026 06:17:32 +0000 (0:00:01.153) 0:24:10.965 *******
2026-02-15 06:17:38.619130 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:17:38.619141 | orchestrator |
2026-02-15 06:17:38.619151 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 06:17:38.619162 | orchestrator | Sunday 15 February 2026 06:17:33 +0000 (0:00:01.122) 0:24:12.088 *******
2026-02-15 06:17:38.619173 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:17:38.619183 | orchestrator |
2026-02-15 06:17:38.619194 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 06:17:38.619204 | orchestrator | Sunday 15 February 2026 06:17:35 +0000 (0:00:01.156) 0:24:13.245 *******
2026-02-15 06:17:38.619219 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:17:38.619230 | orchestrator |
2026-02-15 06:17:38.619241 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 06:17:38.619251 | orchestrator | Sunday 15 February 2026 06:17:36 +0000 (0:00:01.170) 0:24:14.416 *******
2026-02-15 06:17:38.619262 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:17:38.619273 | orchestrator |
2026-02-15 06:17:38.619283 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 06:17:38.619294 | orchestrator | Sunday 15 February 2026 06:17:37 +0000 (0:00:01.155) 0:24:15.572 *******
2026-02-15 06:17:38.619311 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:18:27.900491 | orchestrator |
2026-02-15 06:18:27.900610 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 06:18:27.900627 | orchestrator | Sunday 15 February 2026 06:17:38 +0000 (0:00:01.136) 0:24:16.708 *******
2026-02-15 06:18:27.900639 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:18:27.900652 | orchestrator |
2026-02-15 06:18:27.900663 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 06:18:27.900674 | orchestrator | Sunday 15 February 2026 06:17:39 +0000 (0:00:01.181) 0:24:17.890 *******
2026-02-15 06:18:27.900685 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:18:27.900695 | orchestrator |
2026-02-15 06:18:27.900706 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-15 06:18:27.900717 | orchestrator | Sunday 15 February 2026 06:17:40 +0000 (0:00:01.164) 0:24:19.054 *******
2026-02-15 06:18:27.900728 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.900741 | orchestrator |
2026-02-15 06:18:27.900751 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-15 06:18:27.900762 | orchestrator | Sunday 15 February 2026 06:17:42 +0000 (0:00:01.119) 0:24:20.174 *******
2026-02-15 06:18:27.900773 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.900783 | orchestrator |
2026-02-15 06:18:27.900794 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-15 06:18:27.900805 | orchestrator | Sunday 15 February 2026 06:17:43 +0000 (0:00:01.159) 0:24:21.333 *******
2026-02-15 06:18:27.900816 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.900826 | orchestrator |
2026-02-15 06:18:27.900837 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-15 06:18:27.900848 | orchestrator | Sunday 15 February 2026 06:17:44 +0000 (0:00:01.122) 0:24:22.456 *******
2026-02-15 06:18:27.900858 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.900869 | orchestrator |
2026-02-15 06:18:27.900880 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-15 06:18:27.900891 | orchestrator | Sunday 15 February 2026 06:17:45 +0000 (0:00:01.134) 0:24:23.591 *******
2026-02-15 06:18:27.900901 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.900912 | orchestrator |
2026-02-15 06:18:27.900923 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-15 06:18:27.900934 | orchestrator | Sunday 15 February 2026 06:17:46 +0000 (0:00:01.140) 0:24:24.732 *******
2026-02-15 06:18:27.900944 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.900955 | orchestrator |
2026-02-15 06:18:27.900966 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-15 06:18:27.900976 | orchestrator | Sunday 15 February 2026 06:17:47 +0000 (0:00:01.126) 0:24:25.859 *******
2026-02-15 06:18:27.900987 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.900998 | orchestrator |
2026-02-15 06:18:27.901011 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-15 06:18:27.901024 | orchestrator | Sunday 15 February 2026 06:17:48 +0000 (0:00:01.126) 0:24:26.985 *******
2026-02-15 06:18:27.901037 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.901049 | orchestrator |
2026-02-15 06:18:27.901062 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-15 06:18:27.901099 | orchestrator | Sunday 15 February 2026 06:17:50 +0000 (0:00:01.131) 0:24:28.116 *******
2026-02-15 06:18:27.901112 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.901124 | orchestrator |
2026-02-15 06:18:27.901137 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-15 06:18:27.901150 | orchestrator | Sunday 15 February 2026 06:17:51 +0000 (0:00:01.176) 0:24:29.293 *******
2026-02-15 06:18:27.901177 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.901190 | orchestrator |
2026-02-15 06:18:27.901202 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-15 06:18:27.901215 | orchestrator | Sunday 15 February 2026 06:17:52 +0000 (0:00:01.136) 0:24:30.430 *******
2026-02-15 06:18:27.901227 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.901239 | orchestrator |
2026-02-15 06:18:27.901251 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-15 06:18:27.901263 | orchestrator | Sunday 15 February 2026 06:17:53 +0000 (0:00:01.200) 0:24:31.631 *******
2026-02-15 06:18:27.901275 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.901288 | orchestrator |
2026-02-15 06:18:27.901300 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-15 06:18:27.901312 | orchestrator | Sunday 15 February 2026 06:17:54 +0000 (0:00:01.121) 0:24:32.752 *******
2026-02-15 06:18:27.901324 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:18:27.901336 | orchestrator |
2026-02-15 06:18:27.901349 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-15 06:18:27.901361 | orchestrator | Sunday 15 February 2026 06:17:56 +0000 (0:00:01.964) 0:24:34.717 *******
2026-02-15 06:18:27.901372 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:18:27.901382 | orchestrator |
2026-02-15 06:18:27.901393 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-15 06:18:27.901404 | orchestrator | Sunday 15 February 2026 06:17:59 +0000 (0:00:02.488) 0:24:37.206 *******
2026-02-15 06:18:27.901435 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0
2026-02-15 06:18:27.901447 | orchestrator |
2026-02-15 06:18:27.901458 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-15 06:18:27.901468 | orchestrator | Sunday 15 February 2026 06:18:00 +0000 (0:00:01.130) 0:24:38.337 *******
2026-02-15 06:18:27.901479 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.901490 | orchestrator |
2026-02-15 06:18:27.901500 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-15 06:18:27.901511 | orchestrator | Sunday 15 February 2026 06:18:01 +0000 (0:00:01.160) 0:24:39.497 *******
2026-02-15 06:18:27.901521 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.901532 | orchestrator |
2026-02-15 06:18:27.901542 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-15 06:18:27.901553 | orchestrator | Sunday 15 February 2026 06:18:02 +0000 (0:00:01.134) 0:24:40.632 *******
2026-02-15 06:18:27.901581 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-15 06:18:27.901593 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-15 06:18:27.901604 | orchestrator |
2026-02-15 06:18:27.901614 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-15 06:18:27.901625 | orchestrator | Sunday 15 February 2026 06:18:04 +0000 (0:00:01.899) 0:24:42.531 *******
2026-02-15 06:18:27.901635 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:18:27.901646 | orchestrator |
2026-02-15 06:18:27.901656 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-15 06:18:27.901667 | orchestrator | Sunday 15 February 2026 06:18:05 +0000 (0:00:01.472) 0:24:44.004 *******
2026-02-15 06:18:27.901678 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.901688 | orchestrator |
2026-02-15 06:18:27.901699 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-15 06:18:27.901709 | orchestrator | Sunday 15 February 2026 06:18:07 +0000 (0:00:01.169) 0:24:45.173 *******
2026-02-15 06:18:27.901728 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.901739 | orchestrator |
2026-02-15 06:18:27.901750 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-15 06:18:27.901761 | orchestrator | Sunday 15 February 2026 06:18:08 +0000 (0:00:01.222) 0:24:46.396 *******
2026-02-15 06:18:27.901771 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.901782 | orchestrator |
2026-02-15 06:18:27.901793 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-15 06:18:27.901803 | orchestrator | Sunday 15 February 2026 06:18:09 +0000 (0:00:01.137) 0:24:47.533 *******
2026-02-15 06:18:27.901818 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0
2026-02-15 06:18:27.901837 | orchestrator |
2026-02-15 06:18:27.901856 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-15 06:18:27.901875 | orchestrator | Sunday 15 February 2026 06:18:10 +0000 (0:00:01.132) 0:24:48.666 *******
2026-02-15 06:18:27.901890 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:18:27.901910 | orchestrator |
2026-02-15 06:18:27.901926 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-15 06:18:27.901937 | orchestrator | Sunday 15 February 2026 06:18:12 +0000 (0:00:01.792) 0:24:50.458 *******
2026-02-15 06:18:27.901947 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-15 06:18:27.901958 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-15 06:18:27.901968 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-15 06:18:27.901979 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.901989 | orchestrator |
2026-02-15 06:18:27.902000 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-15 06:18:27.902011 | orchestrator | Sunday 15 February 2026 06:18:13 +0000 (0:00:01.146) 0:24:51.605 *******
2026-02-15 06:18:27.902084 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.902096 | orchestrator |
2026-02-15 06:18:27.902107 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-15 06:18:27.902118 | orchestrator | Sunday 15 February 2026 06:18:14 +0000 (0:00:01.144) 0:24:52.749 *******
2026-02-15 06:18:27.902138 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.902149 | orchestrator |
2026-02-15 06:18:27.902160 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-15 06:18:27.902177 | orchestrator | Sunday 15 February 2026 06:18:15 +0000 (0:00:01.194) 0:24:53.943 *******
2026-02-15 06:18:27.902188 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.902199 | orchestrator |
2026-02-15 06:18:27.902210 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-15 06:18:27.902220 | orchestrator | Sunday 15 February 2026 06:18:16 +0000 (0:00:01.135) 0:24:55.079 *******
2026-02-15 06:18:27.902231 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.902242 | orchestrator |
2026-02-15 06:18:27.902253 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-15 06:18:27.902263 | orchestrator | Sunday 15 February 2026 06:18:18 +0000 (0:00:01.135) 0:24:56.214 *******
2026-02-15 06:18:27.902274 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.902285 | orchestrator |
2026-02-15 06:18:27.902295 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-15 06:18:27.902306 | orchestrator | Sunday 15 February 2026 06:18:19 +0000 (0:00:01.171) 0:24:57.386 *******
2026-02-15 06:18:27.902317 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:18:27.902327 | orchestrator |
2026-02-15 06:18:27.902338 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-15 06:18:27.902349 | orchestrator | Sunday 15 February 2026 06:18:21 +0000 (0:00:02.627) 0:25:00.014 *******
2026-02-15 06:18:27.902359 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:18:27.902370 | orchestrator |
2026-02-15 06:18:27.902381 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-15 06:18:27.902400 | orchestrator | Sunday 15 February 2026 06:18:23 +0000 (0:00:01.165) 0:25:01.180 *******
2026-02-15 06:18:27.902411 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0
2026-02-15 06:18:27.902455 | orchestrator |
2026-02-15 06:18:27.902466 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-15 06:18:27.902477 | orchestrator | Sunday 15 February 2026 06:18:24 +0000 (0:00:01.286) 0:25:02.466 *******
2026-02-15 06:18:27.902487 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.902498 | orchestrator |
2026-02-15 06:18:27.902508 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-15 06:18:27.902519 | orchestrator | Sunday 15 February 2026 06:18:25 +0000 (0:00:01.184) 0:25:03.651 *******
2026-02-15 06:18:27.902530 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.902540 | orchestrator |
2026-02-15 06:18:27.902551 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-15 06:18:27.902562 | orchestrator | Sunday 15 February 2026 06:18:26 +0000 (0:00:01.164) 0:25:04.816 *******
2026-02-15 06:18:27.902572 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:18:27.902583 | orchestrator |
2026-02-15 06:18:27.902602 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-15 06:19:11.777459 | orchestrator | Sunday 15 February 2026 06:18:27 +0000 (0:00:01.174) 0:25:05.990 *******
2026-02-15 06:19:11.777602 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.777621 | orchestrator |
2026-02-15 06:19:11.777634 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-15 06:19:11.777645 | orchestrator | Sunday 15 February 2026 06:18:29 +0000 (0:00:01.135) 0:25:07.126 *******
2026-02-15 06:19:11.777657 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.777668 | orchestrator |
2026-02-15 06:19:11.777679 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-15 06:19:11.777690 | orchestrator | Sunday 15 February 2026 06:18:30 +0000 (0:00:01.160) 0:25:08.286 *******
2026-02-15 06:19:11.777701 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.777712 | orchestrator |
2026-02-15 06:19:11.777722 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-15 06:19:11.777733 | orchestrator | Sunday 15 February 2026 06:18:31 +0000 (0:00:01.186) 0:25:09.472 *******
2026-02-15 06:19:11.777744 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.777755 | orchestrator |
2026-02-15 06:19:11.777766 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-15 06:19:11.777776 | orchestrator | Sunday 15 February 2026 06:18:32 +0000 (0:00:01.208) 0:25:10.681 *******
2026-02-15 06:19:11.777787 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.777798 | orchestrator |
2026-02-15 06:19:11.777809 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-15 06:19:11.777820 | orchestrator | Sunday 15 February 2026 06:18:33 +0000 (0:00:01.272) 0:25:11.954 *******
2026-02-15 06:19:11.777831 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:19:11.777843 | orchestrator |
2026-02-15 06:19:11.777854 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-15 06:19:11.777864 | orchestrator | Sunday 15 February 2026 06:18:34 +0000 (0:00:01.143) 0:25:13.098 *******
2026-02-15 06:19:11.777875 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0
2026-02-15 06:19:11.777887 | orchestrator |
2026-02-15 06:19:11.777898 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-15 06:19:11.777908 | orchestrator | Sunday 15 February 2026 06:18:36 +0000 (0:00:01.115) 0:25:14.214 *******
2026-02-15 06:19:11.777919 | orchestrator | ok: [testbed-node-0] => (item=/etc/ceph)
2026-02-15 06:19:11.777931 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/)
2026-02-15 06:19:11.777942 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-02-15 06:19:11.777955 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-02-15 06:19:11.777992 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-02-15 06:19:11.778005 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-02-15 06:19:11.778157 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-02-15 06:19:11.778178 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-02-15 06:19:11.778190 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-15 06:19:11.778203 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-15 06:19:11.778216 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 06:19:11.778228 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 06:19:11.778255 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 06:19:11.778268 | orchestrator | ok: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 06:19:11.778280 | orchestrator | ok: [testbed-node-0] => (item=/var/run/ceph)
2026-02-15 06:19:11.778293 | orchestrator | ok: [testbed-node-0] => (item=/var/log/ceph)
2026-02-15 06:19:11.778306 | orchestrator |
2026-02-15 06:19:11.778316 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-15 06:19:11.778327 | orchestrator | Sunday 15 February 2026 06:18:42 +0000 (0:00:06.843) 0:25:21.057 *******
2026-02-15 06:19:11.778368 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778381 | orchestrator |
2026-02-15 06:19:11.778392 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-15 06:19:11.778403 | orchestrator | Sunday 15 February 2026 06:18:44 +0000 (0:00:01.174) 0:25:22.232 *******
2026-02-15 06:19:11.778413 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778425 | orchestrator |
2026-02-15 06:19:11.778436 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 06:19:11.778447 | orchestrator | Sunday 15 February 2026 06:18:45 +0000 (0:00:01.201) 0:25:23.433 *******
2026-02-15 06:19:11.778457 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778468 | orchestrator |
2026-02-15 06:19:11.778479 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 06:19:11.778490 | orchestrator | Sunday 15 February 2026 06:18:46 +0000 (0:00:01.148) 0:25:24.581 *******
2026-02-15 06:19:11.778500 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778511 | orchestrator |
2026-02-15 06:19:11.778522 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 06:19:11.778532 | orchestrator | Sunday 15 February 2026 06:18:47 +0000 (0:00:01.169) 0:25:25.751 *******
2026-02-15 06:19:11.778543 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778554 | orchestrator |
2026-02-15 06:19:11.778564 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 06:19:11.778575 | orchestrator | Sunday 15 February 2026 06:18:48 +0000 (0:00:01.129) 0:25:26.880 *******
2026-02-15 06:19:11.778586 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778596 | orchestrator |
2026-02-15 06:19:11.778607 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-15 06:19:11.778618 | orchestrator | Sunday 15 February 2026 06:18:49 +0000 (0:00:01.139) 0:25:28.020 *******
2026-02-15 06:19:11.778628 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778639 | orchestrator |
2026-02-15 06:19:11.778671 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-15 06:19:11.778683 | orchestrator | Sunday 15 February 2026 06:18:51 +0000 (0:00:01.193) 0:25:29.214 *******
2026-02-15 06:19:11.778694 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778704 | orchestrator |
2026-02-15 06:19:11.778715 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-15 06:19:11.778726 | orchestrator | Sunday 15 February 2026 06:18:52 +0000 (0:00:01.127) 0:25:30.341 *******
2026-02-15 06:19:11.778737 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778748 | orchestrator |
2026-02-15 06:19:11.778759 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-15 06:19:11.778782 | orchestrator | Sunday 15 February 2026 06:18:53 +0000 (0:00:01.157) 0:25:31.499 *******
2026-02-15 06:19:11.778793 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778804 | orchestrator |
2026-02-15 06:19:11.778814 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-15 06:19:11.778825 | orchestrator | Sunday 15 February 2026 06:18:54 +0000 (0:00:01.193) 0:25:32.693 *******
2026-02-15 06:19:11.778836 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778847 | orchestrator |
2026-02-15 06:19:11.778858 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-15 06:19:11.778868 | orchestrator | Sunday 15 February 2026 06:18:55 +0000 (0:00:01.230) 0:25:33.923 *******
2026-02-15 06:19:11.778879 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778890 | orchestrator |
2026-02-15 06:19:11.778901 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-15 06:19:11.778911 | orchestrator | Sunday 15 February 2026 06:18:56 +0000 (0:00:01.114) 0:25:35.038 *******
2026-02-15 06:19:11.778922 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778933 | orchestrator |
2026-02-15 06:19:11.778944 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-15 06:19:11.778954 | orchestrator | Sunday 15 February 2026 06:18:58 +0000 (0:00:01.222) 0:25:36.260 *******
2026-02-15 06:19:11.778965 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.778976 | orchestrator |
2026-02-15 06:19:11.778987 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-15 06:19:11.778998 | orchestrator | Sunday 15 February 2026 06:18:59 +0000 (0:00:01.195) 0:25:37.456 *******
2026-02-15 06:19:11.779008 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.779019 | orchestrator |
2026-02-15 06:19:11.779030 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-15 06:19:11.779040 | orchestrator | Sunday 15 February 2026 06:19:00 +0000 (0:00:01.262) 0:25:38.718 *******
2026-02-15 06:19:11.779051 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.779062 | orchestrator |
2026-02-15 06:19:11.779073 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-15 06:19:11.779083 | orchestrator | Sunday 15 February 2026 06:19:01 +0000 (0:00:01.096) 0:25:39.815 *******
2026-02-15 06:19:11.779094 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.779104 | orchestrator |
2026-02-15 06:19:11.779115 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:19:11.779128 | orchestrator | Sunday 15 February 2026 06:19:02 +0000 (0:00:01.143) 0:25:40.958 *******
2026-02-15 06:19:11.779139 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.779149 | orchestrator |
2026-02-15 06:19:11.779160 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:19:11.779171 | orchestrator | Sunday 15 February 2026 06:19:03 +0000 (0:00:01.125) 0:25:42.084 *******
2026-02-15 06:19:11.779182 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.779192 | orchestrator |
2026-02-15 06:19:11.779203 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:19:11.779214 | orchestrator | Sunday 15 February 2026 06:19:05 +0000 (0:00:01.178) 0:25:43.262 *******
2026-02-15 06:19:11.779225 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.779235 | orchestrator |
2026-02-15 06:19:11.779246 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:19:11.779257 | orchestrator | Sunday 15 February 2026 06:19:06 +0000 (0:00:01.170) 0:25:44.433 *******
2026-02-15 06:19:11.779267 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.779278 | orchestrator |
2026-02-15 06:19:11.779289 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:19:11.779299 | orchestrator | Sunday 15 February 2026 06:19:07 +0000 (0:00:01.181) 0:25:45.615 *******
2026-02-15 06:19:11.779317 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-15 06:19:11.779328 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-15 06:19:11.779374 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-15 06:19:11.779404 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.779416 | orchestrator |
2026-02-15 06:19:11.779427 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:19:11.779438 | orchestrator | Sunday 15 February 2026 06:19:08 +0000 (0:00:01.444) 0:25:47.060 *******
2026-02-15 06:19:11.779449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-15 06:19:11.779460 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-15 06:19:11.779471 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-15 06:19:11.779482 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.779492 | orchestrator |
2026-02-15 06:19:11.779503 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:19:11.779514 | orchestrator | Sunday 15 February 2026 06:19:10 +0000 (0:00:01.413) 0:25:48.473 *******
2026-02-15 06:19:11.779529 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-02-15 06:19:11.779540 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-02-15 06:19:11.779551 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-02-15 06:19:11.779562 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:19:11.779573 | orchestrator |
2026-02-15 06:19:11.779591 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:20:26.080692 | orchestrator | Sunday 15 February 2026 06:19:11 +0000 (0:00:01.391) 0:25:49.865 *******
2026-02-15 06:20:26.080808 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:20:26.080825 | orchestrator |
2026-02-15 06:20:26.080838 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:20:26.080849 | orchestrator | Sunday 15 February 2026 06:19:12 +0000 (0:00:01.167) 0:25:51.032 *******
2026-02-15 06:20:26.080860 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-02-15 06:20:26.080871 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:20:26.080882 | orchestrator |
2026-02-15 06:20:26.080893 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-15 06:20:26.080904 | orchestrator | Sunday 15 February 2026 06:19:14 +0000 (0:00:01.402) 0:25:52.435 *******
2026-02-15 06:20:26.080915 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:20:26.080926 | orchestrator |
2026-02-15 06:20:26.080937 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-15 06:20:26.080947 | orchestrator | Sunday 15 February 2026 06:19:16 +0000 (0:00:01.790) 0:25:54.226 *******
2026-02-15 06:20:26.080958 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 06:20:26.080969 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:20:26.080981 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:20:26.080992 | orchestrator |
2026-02-15 06:20:26.081002 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-15 06:20:26.081013 | orchestrator | Sunday 15 February 2026 06:19:17 +0000 (0:00:01.675) 0:25:55.901 *******
2026-02-15 06:20:26.081069 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0
2026-02-15 06:20:26.081081 | orchestrator |
2026-02-15 06:20:26.081092 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-15 06:20:26.081103 | orchestrator | Sunday 15 February 2026 06:19:19 +0000 (0:00:01.491) 0:25:57.393 *******
2026-02-15 06:20:26.081114 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:20:26.081125 | orchestrator |
2026-02-15 06:20:26.081136 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-15 06:20:26.081147 | orchestrator | Sunday 15 February 2026 06:19:20 +0000 (0:00:01.553) 0:25:58.946 *******
2026-02-15 06:20:26.081157 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:20:26.081190 | orchestrator |
2026-02-15 06:20:26.081201 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-15 06:20:26.081238 | orchestrator | Sunday 15 February 2026 06:19:21 +0000 (0:00:01.129) 0:26:00.076 *******
2026-02-15 06:20:26.081251 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-15 06:20:26.081265 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-15 06:20:26.081277 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-15 06:20:26.081290 | orchestrator | ok: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]
2026-02-15 06:20:26.081302 | orchestrator |
2026-02-15 06:20:26.081316 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-15 06:20:26.081328 | orchestrator | Sunday 15 February 2026 06:19:29 +0000 (0:00:07.696) 0:26:07.772 *******
2026-02-15 06:20:26.081340 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:20:26.081353 | orchestrator |
2026-02-15 06:20:26.081365 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-15 06:20:26.081382 | orchestrator | Sunday 15 February 2026 06:19:30 +0000 (0:00:01.214) 0:26:08.986 *******
2026-02-15 06:20:26.081395 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-15 06:20:26.081408 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-02-15 06:20:26.081421 | orchestrator |
2026-02-15 06:20:26.081433 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-15 06:20:26.081446 | orchestrator | Sunday 15 February 2026 06:19:34 +0000 (0:00:03.261) 0:26:12.247 *******
2026-02-15 06:20:26.081458 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-02-15 06:20:26.081470 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-02-15 06:20:26.081482 | orchestrator |
2026-02-15 06:20:26.081494 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-15 06:20:26.081507 | orchestrator | Sunday 15 February 2026 06:19:36 +0000 (0:00:02.072) 0:26:14.320 *******
2026-02-15 06:20:26.081519 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:20:26.081531 | orchestrator |
2026-02-15 06:20:26.081544 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-15 06:20:26.081556 | orchestrator | Sunday 15 February 2026 06:19:37 +0000 (0:00:01.517) 0:26:15.838 *******
2026-02-15 06:20:26.081569 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:20:26.081581 | orchestrator |
2026-02-15 06:20:26.081592 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-15 06:20:26.081602 | orchestrator | Sunday 15 February 2026 06:19:38 +0000 (0:00:01.229) 0:26:17.067 *******
2026-02-15 06:20:26.081613 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:20:26.081624 | orchestrator |
2026-02-15 06:20:26.081634 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-15 06:20:26.081645 | orchestrator | Sunday 15 February 2026 06:19:40 +0000 (0:00:01.133) 0:26:18.201 *******
2026-02-15 06:20:26.081656 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0
2026-02-15 06:20:26.081667 | orchestrator |
2026-02-15 06:20:26.081677 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-15 06:20:26.081688 | orchestrator | Sunday 15 February 2026 06:19:41 +0000 (0:00:01.537) 0:26:19.738 *******
2026-02-15 06:20:26.081700 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:20:26.081711 | orchestrator |
2026-02-15 06:20:26.081722 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-15 06:20:26.081732 | orchestrator | Sunday 15 February 2026 06:19:42 +0000 (0:00:01.143) 0:26:20.882 *******
2026-02-15 06:20:26.081743 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:20:26.081754 | orchestrator |
2026-02-15 06:20:26.081765 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-15 06:20:26.081793 | orchestrator | Sunday 15 February 2026 06:19:43 +0000 (0:00:01.171) 0:26:22.054 *******
2026-02-15 06:20:26.081804 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0
2026-02-15
06:20:26.081815 | orchestrator | 2026-02-15 06:20:26.081833 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-15 06:20:26.081844 | orchestrator | Sunday 15 February 2026 06:19:45 +0000 (0:00:01.493) 0:26:23.548 ******* 2026-02-15 06:20:26.081855 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:20:26.081866 | orchestrator | 2026-02-15 06:20:26.081876 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-15 06:20:26.081887 | orchestrator | Sunday 15 February 2026 06:19:47 +0000 (0:00:02.113) 0:26:25.661 ******* 2026-02-15 06:20:26.081898 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:20:26.081909 | orchestrator | 2026-02-15 06:20:26.081920 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-15 06:20:26.081930 | orchestrator | Sunday 15 February 2026 06:19:49 +0000 (0:00:02.027) 0:26:27.689 ******* 2026-02-15 06:20:26.081941 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:20:26.081952 | orchestrator | 2026-02-15 06:20:26.081962 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-15 06:20:26.081973 | orchestrator | Sunday 15 February 2026 06:19:52 +0000 (0:00:02.421) 0:26:30.111 ******* 2026-02-15 06:20:26.081984 | orchestrator | changed: [testbed-node-0] 2026-02-15 06:20:26.081995 | orchestrator | 2026-02-15 06:20:26.082005 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-15 06:20:26.082083 | orchestrator | Sunday 15 February 2026 06:19:55 +0000 (0:00:03.800) 0:26:33.911 ******* 2026-02-15 06:20:26.082096 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:20:26.082107 | orchestrator | 2026-02-15 06:20:26.082118 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-15 06:20:26.082128 | orchestrator | 2026-02-15 06:20:26.082139 | 
orchestrator | TASK [Stop ceph mgr] ***********************************************************
2026-02-15 06:20:26.082149 | orchestrator | Sunday 15 February 2026 06:19:57 +0000 (0:00:01.294) 0:26:35.206 *******
2026-02-15 06:20:26.082160 | orchestrator | changed: [testbed-node-1]
2026-02-15 06:20:26.082171 | orchestrator |
2026-02-15 06:20:26.082181 | orchestrator | TASK [Mask ceph mgr systemd unit] **********************************************
2026-02-15 06:20:26.082192 | orchestrator | Sunday 15 February 2026 06:20:09 +0000 (0:00:12.590) 0:26:47.796 *******
2026-02-15 06:20:26.082202 | orchestrator | changed: [testbed-node-1]
2026-02-15 06:20:26.082236 | orchestrator |
2026-02-15 06:20:26.082247 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-15 06:20:26.082258 | orchestrator | Sunday 15 February 2026 06:20:11 +0000 (0:00:02.224) 0:26:50.020 *******
2026-02-15 06:20:26.082268 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-1
2026-02-15 06:20:26.082279 | orchestrator |
2026-02-15 06:20:26.082290 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-15 06:20:26.082301 | orchestrator | Sunday 15 February 2026 06:20:13 +0000 (0:00:01.125) 0:26:51.145 *******
2026-02-15 06:20:26.082312 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:20:26.082322 | orchestrator |
2026-02-15 06:20:26.082333 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-15 06:20:26.082344 | orchestrator | Sunday 15 February 2026 06:20:14 +0000 (0:00:01.441) 0:26:52.586 *******
2026-02-15 06:20:26.082355 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:20:26.082365 | orchestrator |
2026-02-15 06:20:26.082383 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-15 06:20:26.082394 | orchestrator | Sunday 15 February 2026 06:20:15 +0000 (0:00:01.191) 0:26:53.778 *******
2026-02-15 06:20:26.082405 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:20:26.082416 | orchestrator |
2026-02-15 06:20:26.082426 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-15 06:20:26.082437 | orchestrator | Sunday 15 February 2026 06:20:17 +0000 (0:00:01.440) 0:26:55.219 *******
2026-02-15 06:20:26.082448 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:20:26.082458 | orchestrator |
2026-02-15 06:20:26.082469 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-15 06:20:26.082480 | orchestrator | Sunday 15 February 2026 06:20:18 +0000 (0:00:01.241) 0:26:56.460 *******
2026-02-15 06:20:26.082498 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:20:26.082509 | orchestrator |
2026-02-15 06:20:26.082520 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-15 06:20:26.082530 | orchestrator | Sunday 15 February 2026 06:20:19 +0000 (0:00:01.178) 0:26:57.639 *******
2026-02-15 06:20:26.082541 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:20:26.082552 | orchestrator |
2026-02-15 06:20:26.082562 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-15 06:20:26.082573 | orchestrator | Sunday 15 February 2026 06:20:20 +0000 (0:00:01.194) 0:26:58.833 *******
2026-02-15 06:20:26.082584 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:20:26.082594 | orchestrator |
2026-02-15 06:20:26.082605 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-15 06:20:26.082616 | orchestrator | Sunday 15 February 2026 06:20:21 +0000 (0:00:01.128) 0:26:59.962 *******
2026-02-15 06:20:26.082626 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:20:26.082637 | orchestrator |
2026-02-15 06:20:26.082647 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-15 06:20:26.082658 | orchestrator | Sunday 15 February 2026 06:20:23 +0000 (0:00:01.179) 0:27:01.141 *******
2026-02-15 06:20:26.082669 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:20:26.082679 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 06:20:26.082690 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:20:26.082701 | orchestrator |
2026-02-15 06:20:26.082711 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-15 06:20:26.082722 | orchestrator | Sunday 15 February 2026 06:20:24 +0000 (0:00:01.715) 0:27:02.857 *******
2026-02-15 06:20:26.082732 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:20:26.082743 | orchestrator |
2026-02-15 06:20:26.082754 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-15 06:20:26.082772 | orchestrator | Sunday 15 February 2026 06:20:26 +0000 (0:00:01.314) 0:27:04.172 *******
2026-02-15 06:20:50.910770 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:20:50.910887 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 06:20:50.910904 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:20:50.910916 | orchestrator |
2026-02-15 06:20:50.910928 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-15 06:20:50.910940 | orchestrator | Sunday 15 February 2026 06:20:29 +0000 (0:00:02.940) 0:27:07.112 *******
2026-02-15 06:20:50.910951 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-15 06:20:50.910963 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 06:20:50.910973 | orchestrator | skipping:
[testbed-node-1] => (item=testbed-node-2)  2026-02-15 06:20:50.910984 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:20:50.910995 | orchestrator | 2026-02-15 06:20:50.911006 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 06:20:50.911017 | orchestrator | Sunday 15 February 2026 06:20:30 +0000 (0:00:01.445) 0:27:08.558 ******* 2026-02-15 06:20:50.911030 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 06:20:50.911044 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 06:20:50.911055 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 06:20:50.911091 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:20:50.911103 | orchestrator | 2026-02-15 06:20:50.911114 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 06:20:50.911125 | orchestrator | Sunday 15 February 2026 06:20:32 +0000 (0:00:01.639) 0:27:10.198 ******* 2026-02-15 06:20:50.911138 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 
'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:20:50.911166 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:20:50.911211 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:20:50.911223 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:20:50.911234 | orchestrator | 2026-02-15 06:20:50.911245 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 06:20:50.911255 | orchestrator | Sunday 15 February 2026 06:20:33 +0000 (0:00:01.255) 0:27:11.453 ******* 2026-02-15 06:20:50.911269 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:20:26.634232', 'end': '2026-02-15 06:20:26.691582', 'delta': '0:00:00.057350', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 
'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 06:20:50.911302 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:20:27.218653', 'end': '2026-02-15 06:20:27.260977', 'delta': '0:00:00.042324', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 06:20:50.911315 | orchestrator | ok: [testbed-node-1] => (item={'changed': False, 'stdout': 'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:20:27.806503', 'end': '2026-02-15 06:20:27.856784', 'delta': '0:00:00.050281', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 06:20:50.911335 | orchestrator | 2026-02-15 06:20:50.911346 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 06:20:50.911357 | orchestrator | Sunday 15 February 2026 06:20:34 +0000 (0:00:01.241) 0:27:12.695 ******* 2026-02-15 
06:20:50.911368 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:20:50.911379 | orchestrator |
2026-02-15 06:20:50.911390 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-15 06:20:50.911400 | orchestrator | Sunday 15 February 2026 06:20:35 +0000 (0:00:01.320) 0:27:14.016 *******
2026-02-15 06:20:50.911411 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:20:50.911422 | orchestrator |
2026-02-15 06:20:50.911433 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-15 06:20:50.911443 | orchestrator | Sunday 15 February 2026 06:20:37 +0000 (0:00:01.249) 0:27:15.265 *******
2026-02-15 06:20:50.911454 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:20:50.911465 | orchestrator |
2026-02-15 06:20:50.911475 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-15 06:20:50.911486 | orchestrator | Sunday 15 February 2026 06:20:38 +0000 (0:00:01.150) 0:27:16.416 *******
2026-02-15 06:20:50.911497 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:20:50.911507 | orchestrator |
2026-02-15 06:20:50.911524 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 06:20:50.911535 | orchestrator | Sunday 15 February 2026 06:20:40 +0000 (0:00:01.973) 0:27:18.389 *******
2026-02-15 06:20:50.911546 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:20:50.911556 | orchestrator |
2026-02-15 06:20:50.911567 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-15 06:20:50.911578 | orchestrator | Sunday 15 February 2026 06:20:41 +0000 (0:00:01.211) 0:27:19.600 *******
2026-02-15 06:20:50.911589 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:20:50.911599 | orchestrator |
2026-02-15 06:20:50.911610 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-15 06:20:50.911621 | orchestrator | Sunday 15 February 2026 06:20:42 +0000 (0:00:01.192) 0:27:20.793 *******
2026-02-15 06:20:50.911631 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:20:50.911642 | orchestrator |
2026-02-15 06:20:50.911652 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 06:20:50.911663 | orchestrator | Sunday 15 February 2026 06:20:43 +0000 (0:00:01.230) 0:27:22.023 *******
2026-02-15 06:20:50.911674 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:20:50.911684 | orchestrator |
2026-02-15 06:20:50.911695 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-15 06:20:50.911706 | orchestrator | Sunday 15 February 2026 06:20:45 +0000 (0:00:01.183) 0:27:23.206 *******
2026-02-15 06:20:50.911716 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:20:50.911727 | orchestrator |
2026-02-15 06:20:50.911737 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-15 06:20:50.911748 | orchestrator | Sunday 15 February 2026 06:20:46 +0000 (0:00:01.136) 0:27:24.343 *******
2026-02-15 06:20:50.911759 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:20:50.911770 | orchestrator |
2026-02-15 06:20:50.911780 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-15 06:20:50.911791 | orchestrator | Sunday 15 February 2026 06:20:47 +0000 (0:00:01.204) 0:27:25.548 *******
2026-02-15 06:20:50.911802 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:20:50.911812 | orchestrator |
2026-02-15 06:20:50.911823 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-15 06:20:50.911833 | orchestrator | Sunday 15 February 2026 06:20:48 +0000 (0:00:01.141) 0:27:26.689 *******
2026-02-15 06:20:50.911850 | orchestrator | skipping:
[testbed-node-1] 2026-02-15 06:20:50.911862 | orchestrator | 2026-02-15 06:20:50.911872 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-15 06:20:50.911883 | orchestrator | Sunday 15 February 2026 06:20:49 +0000 (0:00:01.142) 0:27:27.831 ******* 2026-02-15 06:20:50.911894 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:20:50.911905 | orchestrator | 2026-02-15 06:20:50.911916 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-15 06:20:50.911934 | orchestrator | Sunday 15 February 2026 06:20:50 +0000 (0:00:01.172) 0:27:29.004 ******* 2026-02-15 06:20:54.634904 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:20:54.635014 | orchestrator | 2026-02-15 06:20:54.635040 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-15 06:20:54.635062 | orchestrator | Sunday 15 February 2026 06:20:52 +0000 (0:00:01.158) 0:27:30.163 ******* 2026-02-15 06:20:54.635084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:20:54.635108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:20:54.635129 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:20:54.635143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 06:20:54.635216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:20:54.635231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:20:54.635243 | orchestrator | skipping: [testbed-node-1] 
=> (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:20:54.635302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47bb0aa1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 06:20:54.635317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:20:54.635335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:20:54.635347 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:20:54.635358 | orchestrator | 2026-02-15 06:20:54.635369 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 06:20:54.635381 | orchestrator | Sunday 15 February 2026 06:20:53 +0000 (0:00:01.296) 0:27:31.459 ******* 2026-02-15 06:20:54.635393 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:20:54.635413 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:20:54.635433 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:21:05.390574 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-34-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:21:05.390718 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:21:05.390766 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:21:05.390780 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:21:05.390839 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '47bb0aa1', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1', 'scsi-SQEMU_QEMU_HARDDISK_47bb0aa1-854d-4042-a0dd-8afa6c7f18e0-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:21:05.390854 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:21:05.390872 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:21:05.390885 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:05.390898 | orchestrator |
2026-02-15 06:21:05.390910 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-15 06:21:05.390922 | orchestrator | Sunday 15 February 2026 06:20:54 +0000 (0:00:01.273) 0:27:32.732 *******
2026-02-15 06:21:05.390933 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:21:05.390953 | orchestrator |
2026-02-15 06:21:05.390965 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-15 06:21:05.390976 | orchestrator | Sunday 15 February 2026 06:20:56 +0000 (0:00:01.574) 0:27:34.307 *******
2026-02-15 06:21:05.390986 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:21:05.390997 | orchestrator |
2026-02-15 06:21:05.391008 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 06:21:05.391019 | orchestrator | Sunday 15 February 2026 06:20:57 +0000 (0:00:01.110) 0:27:35.418 *******
2026-02-15 06:21:05.391029 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:21:05.391040 | orchestrator |
2026-02-15 06:21:05.391051 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 06:21:05.391064 | orchestrator | Sunday 15 February 2026 06:20:58 +0000 (0:00:01.519) 0:27:36.937 *******
2026-02-15 06:21:05.391078 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:05.391090 | orchestrator |
2026-02-15 06:21:05.391103 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 06:21:05.391116 | orchestrator | Sunday 15 February 2026 06:21:00 +0000 (0:00:01.202) 0:27:38.140 *******
2026-02-15 06:21:05.391129 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:05.391142 | orchestrator |
2026-02-15 06:21:05.391213 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 06:21:05.391231 | orchestrator | Sunday 15 February 2026 06:21:01 +0000 (0:00:01.256) 0:27:39.396 *******
2026-02-15 06:21:05.391243 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:05.391256 | orchestrator |
2026-02-15 06:21:05.391268 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-15 06:21:05.391280 | orchestrator | Sunday 15 February 2026 06:21:02 +0000 (0:00:01.159) 0:27:40.556 *******
2026-02-15 06:21:05.391293 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-02-15 06:21:05.391305 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 06:21:05.391318 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-02-15 06:21:05.391331 | orchestrator |
2026-02-15 06:21:05.391343 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-15 06:21:05.391356 | orchestrator | Sunday 15 February 2026 06:21:04 +0000 (0:00:01.748) 0:27:42.305 *******
2026-02-15 06:21:05.391369 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-02-15 06:21:05.391381 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 06:21:05.391394 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-02-15 06:21:05.391406 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:05.391417 | orchestrator |
2026-02-15 06:21:05.391444 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-15 06:21:43.304671 | orchestrator | Sunday 15 February 2026 06:21:05 +0000 (0:00:01.176) 0:27:43.481 *******
2026-02-15 06:21:43.304789 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.304807 | orchestrator |
2026-02-15 06:21:43.304820 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-15 06:21:43.304831 | orchestrator | Sunday 15 February 2026 06:21:06 +0000 (0:00:01.143) 0:27:44.625 *******
2026-02-15 06:21:43.304842 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:21:43.304854 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 06:21:43.304866 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:21:43.304877 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 06:21:43.304887 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-15 06:21:43.304898 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-15 06:21:43.304909 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 06:21:43.304920 | orchestrator |
2026-02-15 06:21:43.304931 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-15 06:21:43.304969 | orchestrator | Sunday 15 February 2026 06:21:08 +0000 (0:00:02.170) 0:27:46.795 *******
2026-02-15 06:21:43.304981 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:21:43.304992 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-02-15 06:21:43.305002 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:21:43.305013 | orchestrator | ok: [testbed-node-1 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 06:21:43.305023 | orchestrator | ok: [testbed-node-1 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-15 06:21:43.305034 | orchestrator | ok: [testbed-node-1 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-15 06:21:43.305045 | orchestrator | ok: [testbed-node-1 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 06:21:43.305055 | orchestrator |
2026-02-15 06:21:43.305066 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 06:21:43.305076 | orchestrator | Sunday 15 February 2026 06:21:10 +0000 (0:00:02.242) 0:27:49.038 *******
2026-02-15 06:21:43.305087 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-1
2026-02-15 06:21:43.305141 | orchestrator |
2026-02-15 06:21:43.305261 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 06:21:43.305278 | orchestrator | Sunday 15 February 2026 06:21:12 +0000 (0:00:01.220) 0:27:50.259 *******
2026-02-15 06:21:43.305290 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-1
2026-02-15 06:21:43.305302 | orchestrator |
2026-02-15 06:21:43.305315 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 06:21:43.305328 | orchestrator | Sunday 15 February 2026 06:21:13 +0000 (0:00:01.168) 0:27:51.428 *******
2026-02-15 06:21:43.305340 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:21:43.305352 | orchestrator |
2026-02-15 06:21:43.305365 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 06:21:43.305377 | orchestrator | Sunday 15 February 2026 06:21:14 +0000 (0:00:01.598) 0:27:53.027 *******
2026-02-15 06:21:43.305390 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.305402 | orchestrator |
2026-02-15 06:21:43.305414 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 06:21:43.305427 | orchestrator | Sunday 15 February 2026 06:21:16 +0000 (0:00:01.109) 0:27:54.137 *******
2026-02-15 06:21:43.305440 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.305452 | orchestrator |
2026-02-15 06:21:43.305464 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 06:21:43.305477 | orchestrator | Sunday 15 February 2026 06:21:17 +0000 (0:00:01.116) 0:27:55.253 *******
2026-02-15 06:21:43.305489 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.305501 | orchestrator |
2026-02-15 06:21:43.305514 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 06:21:43.305526 | orchestrator | Sunday 15 February 2026 06:21:18 +0000 (0:00:01.279) 0:27:56.532 *******
2026-02-15 06:21:43.305537 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:21:43.305549 | orchestrator |
2026-02-15 06:21:43.305561 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 06:21:43.305574 | orchestrator | Sunday 15 February 2026 06:21:20 +0000 (0:00:01.583) 0:27:58.116 *******
2026-02-15 06:21:43.305584 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.305595 | orchestrator |
2026-02-15 06:21:43.305606 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 06:21:43.305616 | orchestrator | Sunday 15 February 2026 06:21:21 +0000 (0:00:01.139) 0:27:59.256 *******
2026-02-15 06:21:43.305627 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.305638 | orchestrator |
2026-02-15 06:21:43.305648 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 06:21:43.305672 | orchestrator | Sunday 15 February 2026 06:21:22 +0000 (0:00:01.158) 0:28:00.414 *******
2026-02-15 06:21:43.305683 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:21:43.305694 | orchestrator |
2026-02-15 06:21:43.305705 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 06:21:43.305715 | orchestrator | Sunday 15 February 2026 06:21:23 +0000 (0:00:01.557) 0:28:01.972 *******
2026-02-15 06:21:43.305726 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:21:43.305736 | orchestrator |
2026-02-15 06:21:43.305747 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 06:21:43.305775 | orchestrator | Sunday 15 February 2026 06:21:25 +0000 (0:00:01.801) 0:28:03.774 *******
2026-02-15 06:21:43.305786 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.305797 | orchestrator |
2026-02-15 06:21:43.305808 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 06:21:43.305819 | orchestrator | Sunday 15 February 2026 06:21:26 +0000 (0:00:00.794) 0:28:04.569 *******
2026-02-15 06:21:43.305829 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:21:43.305840 | orchestrator |
2026-02-15 06:21:43.305851 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 06:21:43.305861 | orchestrator | Sunday 15 February 2026 06:21:27 +0000 (0:00:00.815) 0:28:05.385 *******
2026-02-15 06:21:43.305872 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.305882 | orchestrator |
2026-02-15 06:21:43.305893 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 06:21:43.305904 | orchestrator | Sunday 15 February 2026 06:21:28 +0000 (0:00:00.827) 0:28:06.212 *******
2026-02-15 06:21:43.305914 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.305925 | orchestrator |
2026-02-15 06:21:43.305935 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 06:21:43.305945 | orchestrator | Sunday 15 February 2026 06:21:28 +0000 (0:00:00.841) 0:28:07.053 *******
2026-02-15 06:21:43.305956 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.305967 | orchestrator |
2026-02-15 06:21:43.305977 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 06:21:43.305988 | orchestrator | Sunday 15 February 2026 06:21:29 +0000 (0:00:00.939) 0:28:07.993 *******
2026-02-15 06:21:43.305998 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306009 | orchestrator |
2026-02-15 06:21:43.306076 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 06:21:43.306088 | orchestrator | Sunday 15 February 2026 06:21:30 +0000 (0:00:00.922) 0:28:08.916 *******
2026-02-15 06:21:43.306117 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306128 | orchestrator |
2026-02-15 06:21:43.306139 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 06:21:43.306150 | orchestrator | Sunday 15 February 2026 06:21:31 +0000 (0:00:00.874) 0:28:09.790 *******
2026-02-15 06:21:43.306161 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:21:43.306172 | orchestrator |
2026-02-15 06:21:43.306183 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 06:21:43.306194 | orchestrator | Sunday 15 February 2026 06:21:32 +0000 (0:00:00.903) 0:28:10.693 *******
2026-02-15 06:21:43.306205 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:21:43.306216 | orchestrator |
2026-02-15 06:21:43.306226 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 06:21:43.306237 | orchestrator | Sunday 15 February 2026 06:21:33 +0000 (0:00:00.929) 0:28:11.623 *******
2026-02-15 06:21:43.306248 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:21:43.306259 | orchestrator |
2026-02-15 06:21:43.306270 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-15 06:21:43.306287 | orchestrator | Sunday 15 February 2026 06:21:34 +0000 (0:00:00.867) 0:28:12.491 *******
2026-02-15 06:21:43.306298 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306309 | orchestrator |
2026-02-15 06:21:43.306320 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-15 06:21:43.306339 | orchestrator | Sunday 15 February 2026 06:21:35 +0000 (0:00:00.792) 0:28:13.283 *******
2026-02-15 06:21:43.306350 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306361 | orchestrator |
2026-02-15 06:21:43.306372 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-15 06:21:43.306382 | orchestrator | Sunday 15 February 2026 06:21:36 +0000 (0:00:00.826) 0:28:14.110 *******
2026-02-15 06:21:43.306393 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306404 | orchestrator |
2026-02-15 06:21:43.306415 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-15 06:21:43.306426 | orchestrator | Sunday 15 February 2026 06:21:36 +0000 (0:00:00.779) 0:28:14.889 *******
2026-02-15 06:21:43.306437 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306448 | orchestrator |
2026-02-15 06:21:43.306459 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-15 06:21:43.306469 | orchestrator | Sunday 15 February 2026 06:21:37 +0000 (0:00:00.810) 0:28:15.699 *******
2026-02-15 06:21:43.306480 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306491 | orchestrator |
2026-02-15 06:21:43.306502 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-15 06:21:43.306513 | orchestrator | Sunday 15 February 2026 06:21:38 +0000 (0:00:00.802) 0:28:16.502 *******
2026-02-15 06:21:43.306524 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306535 | orchestrator |
2026-02-15 06:21:43.306546 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-15 06:21:43.306557 | orchestrator | Sunday 15 February 2026 06:21:39 +0000 (0:00:00.775) 0:28:17.278 *******
2026-02-15 06:21:43.306567 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306578 | orchestrator |
2026-02-15 06:21:43.306589 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-15 06:21:43.306600 | orchestrator | Sunday 15 February 2026 06:21:40 +0000 (0:00:00.876) 0:28:18.154 *******
2026-02-15 06:21:43.306611 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306622 | orchestrator |
2026-02-15 06:21:43.306633 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-15 06:21:43.306643 | orchestrator | Sunday 15 February 2026 06:21:40 +0000 (0:00:00.817) 0:28:18.972 *******
2026-02-15 06:21:43.306654 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306665 | orchestrator |
2026-02-15 06:21:43.306676 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-15 06:21:43.306687 | orchestrator | Sunday 15 February 2026 06:21:41 +0000 (0:00:00.786) 0:28:19.758 *******
2026-02-15 06:21:43.306698 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306709 | orchestrator |
2026-02-15 06:21:43.306720 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-15 06:21:43.306731 | orchestrator | Sunday 15 February 2026 06:21:42 +0000 (0:00:00.830) 0:28:20.589 *******
2026-02-15 06:21:43.306741 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:21:43.306753 | orchestrator |
2026-02-15 06:21:43.306771 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-15 06:22:30.003476 | orchestrator | Sunday 15 February 2026 06:21:43 +0000 (0:00:00.807) 0:28:21.396 *******
2026-02-15 06:22:30.003599 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.003617 | orchestrator |
2026-02-15 06:22:30.003629 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-15 06:22:30.003641 | orchestrator | Sunday 15 February 2026 06:21:44 +0000 (0:00:00.841) 0:28:22.237 *******
2026-02-15 06:22:30.003653 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:22:30.003664 | orchestrator |
2026-02-15 06:22:30.003676 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-15 06:22:30.003687 | orchestrator | Sunday 15 February 2026 06:21:45 +0000 (0:00:01.618) 0:28:23.856 *******
2026-02-15 06:22:30.003698 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:22:30.003709 | orchestrator |
2026-02-15 06:22:30.003720 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-15 06:22:30.003754 | orchestrator | Sunday 15 February 2026 06:21:47 +0000 (0:00:02.121) 0:28:25.978 *******
2026-02-15 06:22:30.003766 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-1
2026-02-15 06:22:30.003778 | orchestrator |
2026-02-15 06:22:30.003789 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-15 06:22:30.003800 | orchestrator | Sunday 15 February 2026 06:21:49 +0000 (0:00:01.168) 0:28:27.146 *******
2026-02-15 06:22:30.003811 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.003822 | orchestrator |
2026-02-15 06:22:30.003833 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-15 06:22:30.003844 | orchestrator | Sunday 15 February 2026 06:21:50 +0000 (0:00:01.143) 0:28:28.290 *******
2026-02-15 06:22:30.003854 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.003865 | orchestrator |
2026-02-15 06:22:30.003876 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-15 06:22:30.003887 | orchestrator | Sunday 15 February 2026 06:21:51 +0000 (0:00:01.183) 0:28:29.474 *******
2026-02-15 06:22:30.003897 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-15 06:22:30.003908 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-15 06:22:30.003920 | orchestrator |
2026-02-15 06:22:30.003931 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-15 06:22:30.003942 | orchestrator | Sunday 15 February 2026 06:21:53 +0000 (0:00:01.840) 0:28:31.314 *******
2026-02-15 06:22:30.003953 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:22:30.003964 | orchestrator |
2026-02-15 06:22:30.003975 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-15 06:22:30.003986 | orchestrator | Sunday 15 February 2026 06:21:54 +0000 (0:00:01.595) 0:28:32.910 *******
2026-02-15 06:22:30.004012 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004026 | orchestrator |
2026-02-15 06:22:30.004064 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-15 06:22:30.004078 | orchestrator | Sunday 15 February 2026 06:21:56 +0000 (0:00:01.219) 0:28:34.130 *******
2026-02-15 06:22:30.004090 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004103 | orchestrator |
2026-02-15 06:22:30.004116 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-15 06:22:30.004129 | orchestrator | Sunday 15 February 2026 06:21:56 +0000 (0:00:00.801) 0:28:34.931 *******
2026-02-15 06:22:30.004142 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004155 | orchestrator |
2026-02-15 06:22:30.004168 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-15 06:22:30.004181 | orchestrator | Sunday 15 February 2026 06:21:57 +0000 (0:00:00.788) 0:28:35.719 *******
2026-02-15 06:22:30.004194 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-1
2026-02-15 06:22:30.004206 | orchestrator |
2026-02-15 06:22:30.004219 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-15 06:22:30.004232 | orchestrator | Sunday 15 February 2026 06:21:58 +0000 (0:00:01.135) 0:28:36.855 *******
2026-02-15 06:22:30.004244 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:22:30.004257 | orchestrator |
2026-02-15 06:22:30.004269 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-15 06:22:30.004284 | orchestrator | Sunday 15 February 2026 06:22:00 +0000 (0:00:01.715) 0:28:38.570 *******
2026-02-15 06:22:30.004335 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-15 06:22:30.004355 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-15 06:22:30.004375 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-15 06:22:30.004394 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004412 | orchestrator |
2026-02-15 06:22:30.004423 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-15 06:22:30.004446 | orchestrator | Sunday 15 February 2026 06:22:01 +0000 (0:00:01.173) 0:28:39.744 *******
2026-02-15 06:22:30.004457 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004468 | orchestrator |
2026-02-15 06:22:30.004478 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-15 06:22:30.004489 | orchestrator | Sunday 15 February 2026 06:22:02 +0000 (0:00:01.158) 0:28:40.902 *******
2026-02-15 06:22:30.004500 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004511 | orchestrator |
2026-02-15 06:22:30.004521 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-15 06:22:30.004532 | orchestrator | Sunday 15 February 2026 06:22:03 +0000 (0:00:01.178) 0:28:42.081 *******
2026-02-15 06:22:30.004542 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004553 | orchestrator |
2026-02-15 06:22:30.004564 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-15 06:22:30.004575 | orchestrator | Sunday 15 February 2026 06:22:05 +0000 (0:00:01.192) 0:28:43.273 *******
2026-02-15 06:22:30.004585 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004596 | orchestrator |
2026-02-15 06:22:30.004625 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-15 06:22:30.004636 | orchestrator | Sunday 15 February 2026 06:22:06 +0000 (0:00:01.189) 0:28:44.463 *******
2026-02-15 06:22:30.004647 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004658 | orchestrator |
2026-02-15 06:22:30.004669 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-15 06:22:30.004680 | orchestrator | Sunday 15 February 2026 06:22:07 +0000 (0:00:00.831) 0:28:45.294 *******
2026-02-15 06:22:30.004691 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:22:30.004702 | orchestrator |
2026-02-15 06:22:30.004713 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-15 06:22:30.004723 | orchestrator | Sunday 15 February 2026 06:22:09 +0000 (0:00:02.322) 0:28:47.617 *******
2026-02-15 06:22:30.004734 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:22:30.004744 | orchestrator |
2026-02-15 06:22:30.004755 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-15 06:22:30.004765 | orchestrator | Sunday 15 February 2026 06:22:10 +0000 (0:00:00.802) 0:28:48.419 *******
2026-02-15 06:22:30.004776 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-1
2026-02-15 06:22:30.004787 | orchestrator |
2026-02-15 06:22:30.004797 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-15 06:22:30.004808 | orchestrator | Sunday 15 February 2026 06:22:11 +0000 (0:00:01.117) 0:28:49.536 *******
2026-02-15 06:22:30.004819 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004829 | orchestrator |
2026-02-15 06:22:30.004840 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-15 06:22:30.004850 | orchestrator | Sunday 15 February 2026 06:22:12 +0000 (0:00:01.223) 0:28:50.760 *******
2026-02-15 06:22:30.004861 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004871 | orchestrator |
2026-02-15 06:22:30.004882 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-15 06:22:30.004893 | orchestrator | Sunday 15 February 2026 06:22:13 +0000 (0:00:01.197) 0:28:51.958 *******
2026-02-15 06:22:30.004903 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004914 | orchestrator |
2026-02-15 06:22:30.004925 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-15 06:22:30.004935 | orchestrator | Sunday 15 February 2026 06:22:15 +0000 (0:00:01.169) 0:28:53.127 *******
2026-02-15 06:22:30.004946 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004957 | orchestrator |
2026-02-15 06:22:30.004967 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-15 06:22:30.004978 | orchestrator | Sunday 15 February 2026 06:22:16 +0000 (0:00:01.145) 0:28:54.273 *******
2026-02-15 06:22:30.004988 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.004999 | orchestrator |
2026-02-15 06:22:30.005017 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-15 06:22:30.005052 | orchestrator | Sunday 15 February 2026 06:22:17 +0000 (0:00:01.147) 0:28:55.421 *******
2026-02-15 06:22:30.005064 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.005076 | orchestrator |
2026-02-15 06:22:30.005087 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-15 06:22:30.005097 | orchestrator | Sunday 15 February 2026 06:22:18 +0000 (0:00:01.176) 0:28:56.597 *******
2026-02-15 06:22:30.005108 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.005119 | orchestrator |
2026-02-15 06:22:30.005130 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-15 06:22:30.005140 | orchestrator | Sunday 15 February 2026 06:22:19 +0000 (0:00:01.194) 0:28:57.792 *******
2026-02-15 06:22:30.005151 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:22:30.005162 | orchestrator |
2026-02-15 06:22:30.005173 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-15 06:22:30.005183 | orchestrator | Sunday 15 February 2026 06:22:20 +0000 (0:00:01.291) 0:28:59.083 *******
2026-02-15 06:22:30.005194 | orchestrator | ok: [testbed-node-1]
2026-02-15 06:22:30.005205 | orchestrator |
2026-02-15 06:22:30.005216 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-15 06:22:30.005226 | orchestrator | Sunday 15 February 2026 06:22:22 +0000 (0:00:01.094) 0:29:00.178 *******
2026-02-15 06:22:30.005237 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-1
2026-02-15 06:22:30.005248 | orchestrator |
2026-02-15 06:22:30.005259 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-15 06:22:30.005270 | orchestrator | Sunday 15 February 2026 06:22:23 +0000 (0:00:01.307) 0:29:01.486 *******
2026-02-15 06:22:30.005280 | orchestrator | ok: [testbed-node-1] => (item=/etc/ceph)
2026-02-15 06:22:30.005292 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/)
2026-02-15 06:22:30.005302 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-02-15 06:22:30.005313 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-02-15 06:22:30.005324 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-02-15 06:22:30.005334 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-02-15 06:22:30.005345 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-02-15 06:22:30.005356 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-02-15 06:22:30.005366 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-15 06:22:30.005377 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-15 06:22:30.005388 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 06:22:30.005399 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 06:22:30.005410 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 06:22:30.005420 | orchestrator | ok: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 06:22:30.005431 | orchestrator | ok: [testbed-node-1] => (item=/var/run/ceph)
2026-02-15 06:22:30.005442 | orchestrator | ok: [testbed-node-1] => (item=/var/log/ceph)
2026-02-15 06:22:30.005453 | orchestrator |
2026-02-15 06:22:30.005469 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-15 06:23:13.257103 | orchestrator | Sunday 15 February 2026 06:22:29 +0000 (0:00:06.601) 0:29:08.087 *******
2026-02-15 06:23:13.257221 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:23:13.257239 | orchestrator |
2026-02-15 06:23:13.257251 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-15 06:23:13.257263 | orchestrator | Sunday 15 February 2026 06:22:30 +0000 (0:00:00.813) 0:29:08.901 *******
2026-02-15 06:23:13.257274 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:23:13.257285 | orchestrator |
2026-02-15 06:23:13.257296 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 06:23:13.257333 | orchestrator | Sunday 15 February 2026 06:22:31 +0000 (0:00:00.777) 0:29:09.678 *******
2026-02-15 06:23:13.257344 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:23:13.257355 | orchestrator |
2026-02-15 06:23:13.257366 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 06:23:13.257377 | orchestrator | Sunday 15 February 2026 06:22:32 +0000 (0:00:00.809) 0:29:10.487 *******
2026-02-15 06:23:13.257388 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:23:13.257399 | orchestrator |
2026-02-15 06:23:13.257409 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 06:23:13.257420 | orchestrator | Sunday 15 February 2026 06:22:33 +0000 (0:00:00.832) 0:29:11.320 *******
2026-02-15 06:23:13.257431 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:23:13.257441 | orchestrator |
2026-02-15 06:23:13.257452 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 06:23:13.257463 | orchestrator | Sunday 15 February 2026 06:22:34 +0000 (0:00:00.812) 0:29:12.133 *******
2026-02-15 06:23:13.257474 | orchestrator | skipping: [testbed-node-1]
2026-02-15 06:23:13.257484 |
orchestrator | 2026-02-15 06:23:13.257495 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-15 06:23:13.257507 | orchestrator | Sunday 15 February 2026 06:22:34 +0000 (0:00:00.835) 0:29:12.969 ******* 2026-02-15 06:23:13.257517 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.257528 | orchestrator | 2026-02-15 06:23:13.257539 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-15 06:23:13.257550 | orchestrator | Sunday 15 February 2026 06:22:35 +0000 (0:00:00.769) 0:29:13.738 ******* 2026-02-15 06:23:13.257560 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.257571 | orchestrator | 2026-02-15 06:23:13.257581 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-15 06:23:13.257592 | orchestrator | Sunday 15 February 2026 06:22:36 +0000 (0:00:00.789) 0:29:14.528 ******* 2026-02-15 06:23:13.257606 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.257619 | orchestrator | 2026-02-15 06:23:13.257645 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-15 06:23:13.257658 | orchestrator | Sunday 15 February 2026 06:22:37 +0000 (0:00:00.840) 0:29:15.369 ******* 2026-02-15 06:23:13.257670 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.257683 | orchestrator | 2026-02-15 06:23:13.257695 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-15 06:23:13.257708 | orchestrator | Sunday 15 February 2026 06:22:38 +0000 (0:00:00.889) 0:29:16.259 ******* 2026-02-15 06:23:13.257720 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.257732 | orchestrator | 2026-02-15 06:23:13.257744 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] 
******************************* 2026-02-15 06:23:13.257757 | orchestrator | Sunday 15 February 2026 06:22:38 +0000 (0:00:00.814) 0:29:17.073 ******* 2026-02-15 06:23:13.257769 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.257781 | orchestrator | 2026-02-15 06:23:13.257793 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-15 06:23:13.257806 | orchestrator | Sunday 15 February 2026 06:22:39 +0000 (0:00:00.820) 0:29:17.894 ******* 2026-02-15 06:23:13.257820 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.257832 | orchestrator | 2026-02-15 06:23:13.257844 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-15 06:23:13.257857 | orchestrator | Sunday 15 February 2026 06:22:40 +0000 (0:00:01.006) 0:29:18.900 ******* 2026-02-15 06:23:13.257869 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.257882 | orchestrator | 2026-02-15 06:23:13.257894 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-15 06:23:13.257906 | orchestrator | Sunday 15 February 2026 06:22:41 +0000 (0:00:00.775) 0:29:19.676 ******* 2026-02-15 06:23:13.257919 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.257939 | orchestrator | 2026-02-15 06:23:13.257952 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-15 06:23:13.257963 | orchestrator | Sunday 15 February 2026 06:22:42 +0000 (0:00:00.902) 0:29:20.578 ******* 2026-02-15 06:23:13.257974 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.258110 | orchestrator | 2026-02-15 06:23:13.258122 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-15 06:23:13.258133 | orchestrator | Sunday 15 February 2026 06:22:43 +0000 (0:00:00.777) 0:29:21.356 ******* 2026-02-15 06:23:13.258144 | orchestrator | skipping: 
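The skipped `ceph-config` tasks above distinguish a "legacy" and a "new" report shape for `ceph-volume lvm batch --report` when counting OSDs to be created. A sketch of that branching, under the assumption (not confirmed by this log) that the legacy report is a JSON object with an `osds` list while the newer report is a plain JSON list of OSD specs:

```python
import json

def count_osds(report_stdout: str) -> int:
    """Count OSDs from `ceph-volume lvm batch --report --format json` output.

    Assumed schemas: legacy report = dict containing an "osds" list;
    new report = top-level list of OSD specs.
    """
    report = json.loads(report_stdout)
    if isinstance(report, dict):          # legacy report shape
        return len(report.get("osds", []))
    return len(report)                    # new report shape: list of specs
```

The result would then feed `num_osds`, to which already-created OSDs from `ceph-volume lvm list` are added in the subsequent task.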
[testbed-node-1] 2026-02-15 06:23:13.258155 | orchestrator | 2026-02-15 06:23:13.258165 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-15 06:23:13.258178 | orchestrator | Sunday 15 February 2026 06:22:44 +0000 (0:00:00.773) 0:29:22.129 ******* 2026-02-15 06:23:13.258189 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.258200 | orchestrator | 2026-02-15 06:23:13.258210 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-15 06:23:13.258221 | orchestrator | Sunday 15 February 2026 06:22:44 +0000 (0:00:00.781) 0:29:22.911 ******* 2026-02-15 06:23:13.258232 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.258242 | orchestrator | 2026-02-15 06:23:13.258253 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-15 06:23:13.258264 | orchestrator | Sunday 15 February 2026 06:22:45 +0000 (0:00:00.822) 0:29:23.733 ******* 2026-02-15 06:23:13.258275 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.258286 | orchestrator | 2026-02-15 06:23:13.258315 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-15 06:23:13.258327 | orchestrator | Sunday 15 February 2026 06:22:46 +0000 (0:00:00.810) 0:29:24.544 ******* 2026-02-15 06:23:13.258338 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.258349 | orchestrator | 2026-02-15 06:23:13.258360 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-15 06:23:13.258371 | orchestrator | Sunday 15 February 2026 06:22:47 +0000 (0:00:00.828) 0:29:25.372 ******* 2026-02-15 06:23:13.258381 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-15 06:23:13.258392 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-15 
06:23:13.258403 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-15 06:23:13.258414 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.258424 | orchestrator | 2026-02-15 06:23:13.258435 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 06:23:13.258446 | orchestrator | Sunday 15 February 2026 06:22:48 +0000 (0:00:01.559) 0:29:26.932 ******* 2026-02-15 06:23:13.258456 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-15 06:23:13.258467 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-15 06:23:13.258478 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-15 06:23:13.258488 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.258499 | orchestrator | 2026-02-15 06:23:13.258510 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 06:23:13.258521 | orchestrator | Sunday 15 February 2026 06:22:50 +0000 (0:00:01.468) 0:29:28.401 ******* 2026-02-15 06:23:13.258531 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2026-02-15 06:23:13.258542 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2026-02-15 06:23:13.258553 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2026-02-15 06:23:13.258563 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.258574 | orchestrator | 2026-02-15 06:23:13.258585 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 06:23:13.258595 | orchestrator | Sunday 15 February 2026 06:22:51 +0000 (0:00:01.688) 0:29:30.089 ******* 2026-02-15 06:23:13.258606 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.258617 | orchestrator | 2026-02-15 06:23:13.258636 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 
06:23:13.258647 | orchestrator | Sunday 15 February 2026 06:22:52 +0000 (0:00:00.910) 0:29:30.999 ******* 2026-02-15 06:23:13.258658 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-02-15 06:23:13.258668 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.258679 | orchestrator | 2026-02-15 06:23:13.258696 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-15 06:23:13.258707 | orchestrator | Sunday 15 February 2026 06:22:53 +0000 (0:00:00.957) 0:29:31.957 ******* 2026-02-15 06:23:13.258718 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:23:13.258729 | orchestrator | 2026-02-15 06:23:13.258739 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-02-15 06:23:13.258750 | orchestrator | Sunday 15 February 2026 06:22:55 +0000 (0:00:01.495) 0:29:33.452 ******* 2026-02-15 06:23:13.258760 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:23:13.258772 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-02-15 06:23:13.258783 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:23:13.258794 | orchestrator | 2026-02-15 06:23:13.258804 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-02-15 06:23:13.258815 | orchestrator | Sunday 15 February 2026 06:22:56 +0000 (0:00:01.359) 0:29:34.812 ******* 2026-02-15 06:23:13.258825 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-1 2026-02-15 06:23:13.258836 | orchestrator | 2026-02-15 06:23:13.258846 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-02-15 06:23:13.258857 | orchestrator | Sunday 15 February 2026 06:22:57 +0000 (0:00:01.174) 0:29:35.987 ******* 2026-02-15 06:23:13.258868 | orchestrator | ok: [testbed-node-1] 2026-02-15 
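The `ceph-facts` tasks above try several sources for `_radosgw_address` in order: an address inside `radosgw_address_block` (IPv4 or IPv6), then an explicit `radosgw_address`, then an address on `radosgw_interface`. A hedged sketch of the first two fallbacks (`host_addresses`, `address_block`, and `explicit` are illustrative parameter names, not the role's variables; the interface fallback is not modelled):

```python
import ipaddress

def pick_radosgw_address(host_addresses, address_block=None, explicit=None):
    """Return the first host address inside address_block, else the
    explicit address, else None (interface-based fallback omitted)."""
    if address_block:
        net = ipaddress.ip_network(address_block)
        for addr in host_addresses:
            if ipaddress.ip_address(addr) in net:
                return addr
    if explicit:
        return explicit
    return None
```

In this run all of these tasks skipped for testbed-node-1, since it carries no rgw instances.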
06:23:13.258879 | orchestrator | 2026-02-15 06:23:13.258889 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-02-15 06:23:13.258900 | orchestrator | Sunday 15 February 2026 06:22:59 +0000 (0:00:01.547) 0:29:37.536 ******* 2026-02-15 06:23:13.258910 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:23:13.258921 | orchestrator | 2026-02-15 06:23:13.258932 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-02-15 06:23:13.258942 | orchestrator | Sunday 15 February 2026 06:23:00 +0000 (0:00:01.172) 0:29:38.708 ******* 2026-02-15 06:23:13.258953 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 06:23:13.258964 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 06:23:13.258974 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 06:23:13.259002 | orchestrator | ok: [testbed-node-1 -> {{ groups[mon_group_name][0] }}] 2026-02-15 06:23:13.259013 | orchestrator | 2026-02-15 06:23:13.259024 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-02-15 06:23:13.259034 | orchestrator | Sunday 15 February 2026 06:23:08 +0000 (0:00:07.843) 0:29:46.552 ******* 2026-02-15 06:23:13.259045 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:23:13.259056 | orchestrator | 2026-02-15 06:23:13.259067 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-02-15 06:23:13.259078 | orchestrator | Sunday 15 February 2026 06:23:09 +0000 (0:00:01.193) 0:29:47.745 ******* 2026-02-15 06:23:13.259089 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-15 06:23:13.259100 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-02-15 06:23:13.259110 | orchestrator | 2026-02-15 06:23:13.259128 | orchestrator | 
TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-02-15 06:24:01.246306 | orchestrator | Sunday 15 February 2026 06:23:13 +0000 (0:00:03.602) 0:29:51.347 ******* 2026-02-15 06:24:01.246425 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-02-15 06:24:01.246442 | orchestrator | ok: [testbed-node-1] => (item=None) 2026-02-15 06:24:01.246455 | orchestrator | 2026-02-15 06:24:01.246492 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-02-15 06:24:01.246504 | orchestrator | Sunday 15 February 2026 06:23:15 +0000 (0:00:02.081) 0:29:53.429 ******* 2026-02-15 06:24:01.246515 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:24:01.246526 | orchestrator | 2026-02-15 06:24:01.246537 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-02-15 06:24:01.246548 | orchestrator | Sunday 15 February 2026 06:23:16 +0000 (0:00:01.647) 0:29:55.076 ******* 2026-02-15 06:24:01.246559 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:24:01.246570 | orchestrator | 2026-02-15 06:24:01.246581 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-02-15 06:24:01.246591 | orchestrator | Sunday 15 February 2026 06:23:17 +0000 (0:00:00.801) 0:29:55.877 ******* 2026-02-15 06:24:01.246602 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:24:01.246613 | orchestrator | 2026-02-15 06:24:01.246623 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-02-15 06:24:01.246634 | orchestrator | Sunday 15 February 2026 06:23:18 +0000 (0:00:00.790) 0:29:56.668 ******* 2026-02-15 06:24:01.246645 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-1 2026-02-15 06:24:01.246656 | orchestrator | 2026-02-15 06:24:01.246667 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] 
************* 2026-02-15 06:24:01.246678 | orchestrator | Sunday 15 February 2026 06:23:19 +0000 (0:00:01.189) 0:29:57.857 ******* 2026-02-15 06:24:01.246688 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:24:01.246699 | orchestrator | 2026-02-15 06:24:01.246709 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-02-15 06:24:01.246720 | orchestrator | Sunday 15 February 2026 06:23:20 +0000 (0:00:01.155) 0:29:59.013 ******* 2026-02-15 06:24:01.246731 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:24:01.246741 | orchestrator | 2026-02-15 06:24:01.246753 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-02-15 06:24:01.246764 | orchestrator | Sunday 15 February 2026 06:23:22 +0000 (0:00:01.168) 0:30:00.181 ******* 2026-02-15 06:24:01.246774 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-1 2026-02-15 06:24:01.246785 | orchestrator | 2026-02-15 06:24:01.246796 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-02-15 06:24:01.246807 | orchestrator | Sunday 15 February 2026 06:23:23 +0000 (0:00:01.175) 0:30:01.357 ******* 2026-02-15 06:24:01.246817 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:24:01.246842 | orchestrator | 2026-02-15 06:24:01.246860 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-02-15 06:24:01.246878 | orchestrator | Sunday 15 February 2026 06:23:25 +0000 (0:00:02.076) 0:30:03.434 ******* 2026-02-15 06:24:01.246897 | orchestrator | ok: [testbed-node-1] 2026-02-15 06:24:01.246915 | orchestrator | 2026-02-15 06:24:01.246962 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-02-15 06:24:01.246975 | orchestrator | Sunday 15 February 2026 06:23:27 +0000 (0:00:01.974) 0:30:05.408 ******* 2026-02-15 06:24:01.246988 | orchestrator | ok: 
[testbed-node-1] 2026-02-15 06:24:01.247001 | orchestrator | 2026-02-15 06:24:01.247013 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-02-15 06:24:01.247025 | orchestrator | Sunday 15 February 2026 06:23:29 +0000 (0:00:02.555) 0:30:07.964 ******* 2026-02-15 06:24:01.247037 | orchestrator | changed: [testbed-node-1] 2026-02-15 06:24:01.247050 | orchestrator | 2026-02-15 06:24:01.247062 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-02-15 06:24:01.247075 | orchestrator | Sunday 15 February 2026 06:23:33 +0000 (0:00:03.576) 0:30:11.540 ******* 2026-02-15 06:24:01.247088 | orchestrator | skipping: [testbed-node-1] 2026-02-15 06:24:01.247100 | orchestrator | 2026-02-15 06:24:01.247112 | orchestrator | PLAY [Upgrade ceph mgr nodes] ************************************************** 2026-02-15 06:24:01.247124 | orchestrator | 2026-02-15 06:24:01.247136 | orchestrator | TASK [Stop ceph mgr] *********************************************************** 2026-02-15 06:24:01.247163 | orchestrator | Sunday 15 February 2026 06:23:34 +0000 (0:00:01.027) 0:30:12.567 ******* 2026-02-15 06:24:01.247176 | orchestrator | changed: [testbed-node-2] 2026-02-15 06:24:01.247190 | orchestrator | 2026-02-15 06:24:01.247203 | orchestrator | TASK [Mask ceph mgr systemd unit] ********************************************** 2026-02-15 06:24:01.247213 | orchestrator | Sunday 15 February 2026 06:23:37 +0000 (0:00:02.548) 0:30:15.115 ******* 2026-02-15 06:24:01.247225 | orchestrator | changed: [testbed-node-2] 2026-02-15 06:24:01.247236 | orchestrator | 2026-02-15 06:24:01.247246 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 06:24:01.247257 | orchestrator | Sunday 15 February 2026 06:23:39 +0000 (0:00:02.120) 0:30:17.236 ******* 2026-02-15 06:24:01.247268 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for 
testbed-node-2 2026-02-15 06:24:01.247278 | orchestrator | 2026-02-15 06:24:01.247289 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 06:24:01.247300 | orchestrator | Sunday 15 February 2026 06:23:40 +0000 (0:00:01.149) 0:30:18.385 ******* 2026-02-15 06:24:01.247310 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:01.247321 | orchestrator | 2026-02-15 06:24:01.247332 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 06:24:01.247342 | orchestrator | Sunday 15 February 2026 06:23:41 +0000 (0:00:01.602) 0:30:19.987 ******* 2026-02-15 06:24:01.247353 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:01.247364 | orchestrator | 2026-02-15 06:24:01.247374 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 06:24:01.247385 | orchestrator | Sunday 15 February 2026 06:23:43 +0000 (0:00:01.144) 0:30:21.131 ******* 2026-02-15 06:24:01.247396 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:01.247406 | orchestrator | 2026-02-15 06:24:01.247417 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 06:24:01.247446 | orchestrator | Sunday 15 February 2026 06:23:44 +0000 (0:00:01.493) 0:30:22.625 ******* 2026-02-15 06:24:01.247458 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:01.247468 | orchestrator | 2026-02-15 06:24:01.247479 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 06:24:01.247490 | orchestrator | Sunday 15 February 2026 06:23:45 +0000 (0:00:01.205) 0:30:23.831 ******* 2026-02-15 06:24:01.247501 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:01.247511 | orchestrator | 2026-02-15 06:24:01.247522 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 06:24:01.247532 | orchestrator | Sunday 15 
February 2026 06:23:46 +0000 (0:00:01.200) 0:30:25.032 ******* 2026-02-15 06:24:01.247543 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:01.247554 | orchestrator | 2026-02-15 06:24:01.247565 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 06:24:01.247576 | orchestrator | Sunday 15 February 2026 06:23:48 +0000 (0:00:01.211) 0:30:26.244 ******* 2026-02-15 06:24:01.247587 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:01.247597 | orchestrator | 2026-02-15 06:24:01.247608 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 06:24:01.247619 | orchestrator | Sunday 15 February 2026 06:23:49 +0000 (0:00:01.223) 0:30:27.468 ******* 2026-02-15 06:24:01.247629 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:01.247640 | orchestrator | 2026-02-15 06:24:01.247651 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 06:24:01.247661 | orchestrator | Sunday 15 February 2026 06:23:50 +0000 (0:00:01.184) 0:30:28.653 ******* 2026-02-15 06:24:01.247672 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:24:01.247683 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:24:01.247693 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:24:01.247704 | orchestrator | 2026-02-15 06:24:01.247715 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 06:24:01.247733 | orchestrator | Sunday 15 February 2026 06:23:52 +0000 (0:00:01.821) 0:30:30.475 ******* 2026-02-15 06:24:01.247744 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:01.247754 | orchestrator | 2026-02-15 06:24:01.247765 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 
06:24:01.247776 | orchestrator | Sunday 15 February 2026 06:23:53 +0000 (0:00:01.308) 0:30:31.784 ******* 2026-02-15 06:24:01.247786 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:24:01.247797 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:24:01.247808 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:24:01.247819 | orchestrator | 2026-02-15 06:24:01.247835 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 06:24:01.247846 | orchestrator | Sunday 15 February 2026 06:23:56 +0000 (0:00:02.926) 0:30:34.710 ******* 2026-02-15 06:24:01.247857 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-15 06:24:01.247868 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-15 06:24:01.247879 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-15 06:24:01.247890 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:01.247900 | orchestrator | 2026-02-15 06:24:01.247911 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 06:24:01.247952 | orchestrator | Sunday 15 February 2026 06:23:58 +0000 (0:00:01.443) 0:30:36.153 ******* 2026-02-15 06:24:01.247967 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 06:24:01.247982 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 06:24:01.247993 | orchestrator | skipping: 
[testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 06:24:01.248004 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:01.248015 | orchestrator | 2026-02-15 06:24:01.248026 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 06:24:01.248036 | orchestrator | Sunday 15 February 2026 06:23:59 +0000 (0:00:01.948) 0:30:38.101 ******* 2026-02-15 06:24:01.248049 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:01.248071 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:21.341112 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:21.341261 | orchestrator | skipping: 
[testbed-node-2] 2026-02-15 06:24:21.341280 | orchestrator | 2026-02-15 06:24:21.341292 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 06:24:21.341304 | orchestrator | Sunday 15 February 2026 06:24:01 +0000 (0:00:01.236) 0:30:39.338 ******* 2026-02-15 06:24:21.341317 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:23:54.213019', 'end': '2026-02-15 06:23:54.265869', 'delta': '0:00:00.052850', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 06:24:21.341345 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:23:54.785911', 'end': '2026-02-15 06:23:54.840281', 'delta': '0:00:00.054370', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 06:24:21.341356 | orchestrator | ok: [testbed-node-2] => (item={'changed': False, 'stdout': 
'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:23:55.348335', 'end': '2026-02-15 06:23:55.399543', 'delta': '0:00:00.051208', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 06:24:21.341366 | orchestrator | 2026-02-15 06:24:21.341377 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 06:24:21.341388 | orchestrator | Sunday 15 February 2026 06:24:02 +0000 (0:00:01.195) 0:30:40.534 ******* 2026-02-15 06:24:21.341397 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:21.341410 | orchestrator | 2026-02-15 06:24:21.341420 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 06:24:21.341430 | orchestrator | Sunday 15 February 2026 06:24:03 +0000 (0:00:01.278) 0:30:41.812 ******* 2026-02-15 06:24:21.341440 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:21.341451 | orchestrator | 2026-02-15 06:24:21.341460 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-15 06:24:21.341469 | orchestrator | Sunday 15 February 2026 06:24:05 +0000 (0:00:01.299) 0:30:43.112 ******* 2026-02-15 06:24:21.341479 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:21.341489 | orchestrator | 2026-02-15 06:24:21.341499 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-15 06:24:21.341510 | orchestrator | Sunday 15 February 2026 06:24:06 +0000 
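The `Set_fact running_mon - container` task above loops over the per-host `docker ps -q --filter name=ceph-mon-<hostname>` results and treats a non-empty container id on stdout as a running mon. A minimal sketch of that selection:

```python
def find_running_mon(ps_results):
    """ps_results: list of (hostname, stdout) pairs from
    `docker ps -q --filter name=ceph-mon-<hostname>`, one per mon host.
    Return the first host whose filter matched a running container,
    i.e. whose stdout holds a non-empty container id."""
    for host, stdout in ps_results:
        if stdout.strip():
            return host
    return None
```

With the stdout values seen in this log (`cf71ab2d386c`, `6de6ee21b104`, `bf842a45b4ed`), the first mon host would be selected.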
(0:00:01.261) 0:30:44.374 ******* 2026-02-15 06:24:21.341520 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:24:21.341529 | orchestrator | 2026-02-15 06:24:21.341538 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:24:21.341556 | orchestrator | Sunday 15 February 2026 06:24:08 +0000 (0:00:02.002) 0:30:46.377 ******* 2026-02-15 06:24:21.341565 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:21.341574 | orchestrator | 2026-02-15 06:24:21.341583 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-15 06:24:21.341592 | orchestrator | Sunday 15 February 2026 06:24:09 +0000 (0:00:01.161) 0:30:47.539 ******* 2026-02-15 06:24:21.341618 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:21.341628 | orchestrator | 2026-02-15 06:24:21.341637 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-15 06:24:21.341647 | orchestrator | Sunday 15 February 2026 06:24:10 +0000 (0:00:01.165) 0:30:48.704 ******* 2026-02-15 06:24:21.341657 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:21.341667 | orchestrator | 2026-02-15 06:24:21.341676 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:24:21.341684 | orchestrator | Sunday 15 February 2026 06:24:11 +0000 (0:00:01.221) 0:30:49.925 ******* 2026-02-15 06:24:21.341694 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:21.341704 | orchestrator | 2026-02-15 06:24:21.341714 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-15 06:24:21.341725 | orchestrator | Sunday 15 February 2026 06:24:12 +0000 (0:00:01.159) 0:30:51.085 ******* 2026-02-15 06:24:21.341735 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:21.341745 | orchestrator | 2026-02-15 06:24:21.341757 | orchestrator | 
TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-15 06:24:21.341767 | orchestrator | Sunday 15 February 2026 06:24:14 +0000 (0:00:01.225) 0:30:52.311 ******* 2026-02-15 06:24:21.341778 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:21.341787 | orchestrator | 2026-02-15 06:24:21.341796 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-15 06:24:21.341805 | orchestrator | Sunday 15 February 2026 06:24:15 +0000 (0:00:01.203) 0:30:53.514 ******* 2026-02-15 06:24:21.341814 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:21.341823 | orchestrator | 2026-02-15 06:24:21.341832 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-15 06:24:21.341840 | orchestrator | Sunday 15 February 2026 06:24:16 +0000 (0:00:01.152) 0:30:54.667 ******* 2026-02-15 06:24:21.341849 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:21.341858 | orchestrator | 2026-02-15 06:24:21.341867 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-15 06:24:21.341877 | orchestrator | Sunday 15 February 2026 06:24:17 +0000 (0:00:01.116) 0:30:55.783 ******* 2026-02-15 06:24:21.341886 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:21.341894 | orchestrator | 2026-02-15 06:24:21.341944 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-15 06:24:21.341954 | orchestrator | Sunday 15 February 2026 06:24:18 +0000 (0:00:01.178) 0:30:56.962 ******* 2026-02-15 06:24:21.341963 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:21.341973 | orchestrator | 2026-02-15 06:24:21.341982 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-15 06:24:21.341999 | orchestrator | Sunday 15 February 2026 06:24:20 +0000 (0:00:01.152) 0:30:58.114 ******* 2026-02-15 
06:24:21.342009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:24:21.342070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:24:21.342089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:24:21.342101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 06:24:21.342113 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:24:21.342133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:24:22.705252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:24:22.705377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1976e1cf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16', 
'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-15 06:24:22.705421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:24:22.705435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:24:22.705448 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:22.705461 | orchestrator | 2026-02-15 06:24:22.705473 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 06:24:22.705485 | orchestrator | Sunday 15 February 2026 06:24:21 +0000 (0:00:01.317) 0:30:59.432 ******* 2026-02-15 06:24:22.705517 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:22.705531 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:22.705543 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:22.705560 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-36-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 
82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:22.705580 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:22.705591 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:22.705602 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 
'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:22.705630 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1976e1cf', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1', 'scsi-SQEMU_QEMU_HARDDISK_1976e1cf-6346-4412-9b3b-15c43c691264-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 
'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:58.258583 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:58.258701 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:24:58.258720 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:58.258735 | orchestrator | 2026-02-15 06:24:58.258747 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-15 06:24:58.258760 | 
orchestrator | Sunday 15 February 2026 06:24:22 +0000 (0:00:01.367) 0:31:00.800 ******* 2026-02-15 06:24:58.258771 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:58.258783 | orchestrator | 2026-02-15 06:24:58.258794 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-15 06:24:58.258805 | orchestrator | Sunday 15 February 2026 06:24:24 +0000 (0:00:01.502) 0:31:02.302 ******* 2026-02-15 06:24:58.258816 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:58.258827 | orchestrator | 2026-02-15 06:24:58.258838 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:24:58.258849 | orchestrator | Sunday 15 February 2026 06:24:25 +0000 (0:00:01.159) 0:31:03.462 ******* 2026-02-15 06:24:58.258911 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:58.258925 | orchestrator | 2026-02-15 06:24:58.258936 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:24:58.258947 | orchestrator | Sunday 15 February 2026 06:24:26 +0000 (0:00:01.545) 0:31:05.008 ******* 2026-02-15 06:24:58.258959 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:58.258970 | orchestrator | 2026-02-15 06:24:58.258980 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:24:58.258992 | orchestrator | Sunday 15 February 2026 06:24:28 +0000 (0:00:01.223) 0:31:06.231 ******* 2026-02-15 06:24:58.259003 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:58.259014 | orchestrator | 2026-02-15 06:24:58.259025 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:24:58.259036 | orchestrator | Sunday 15 February 2026 06:24:29 +0000 (0:00:01.251) 0:31:07.483 ******* 2026-02-15 06:24:58.259047 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:58.259058 | orchestrator | 2026-02-15 06:24:58.259069 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:24:58.259081 | orchestrator | Sunday 15 February 2026 06:24:30 +0000 (0:00:01.197) 0:31:08.680 ******* 2026-02-15 06:24:58.259117 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-02-15 06:24:58.259131 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-02-15 06:24:58.259143 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:24:58.259155 | orchestrator | 2026-02-15 06:24:58.259168 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:24:58.259181 | orchestrator | Sunday 15 February 2026 06:24:32 +0000 (0:00:01.741) 0:31:10.422 ******* 2026-02-15 06:24:58.259194 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-02-15 06:24:58.259207 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-02-15 06:24:58.259219 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-02-15 06:24:58.259232 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:58.259244 | orchestrator | 2026-02-15 06:24:58.259270 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-15 06:24:58.259283 | orchestrator | Sunday 15 February 2026 06:24:33 +0000 (0:00:01.204) 0:31:11.626 ******* 2026-02-15 06:24:58.259295 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:58.259308 | orchestrator | 2026-02-15 06:24:58.259320 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-15 06:24:58.259333 | orchestrator | Sunday 15 February 2026 06:24:34 +0000 (0:00:01.143) 0:31:12.770 ******* 2026-02-15 06:24:58.259345 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:24:58.259358 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => 
(item=testbed-node-1) 2026-02-15 06:24:58.259370 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:24:58.259383 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:24:58.259395 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:24:58.259408 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:24:58.259438 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:24:58.259452 | orchestrator | 2026-02-15 06:24:58.259464 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-15 06:24:58.259477 | orchestrator | Sunday 15 February 2026 06:24:36 +0000 (0:00:02.247) 0:31:15.017 ******* 2026-02-15 06:24:58.259488 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:24:58.259498 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:24:58.259510 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-02-15 06:24:58.259521 | orchestrator | ok: [testbed-node-2 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:24:58.259531 | orchestrator | ok: [testbed-node-2 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:24:58.259542 | orchestrator | ok: [testbed-node-2 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:24:58.259553 | orchestrator | ok: [testbed-node-2 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:24:58.259563 | orchestrator | 2026-02-15 06:24:58.259574 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-15 06:24:58.259584 | orchestrator | Sunday 15 February 2026 06:24:39 +0000 (0:00:02.343) 0:31:17.361 
******* 2026-02-15 06:24:58.259595 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-2 2026-02-15 06:24:58.259607 | orchestrator | 2026-02-15 06:24:58.259617 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-15 06:24:58.259628 | orchestrator | Sunday 15 February 2026 06:24:40 +0000 (0:00:01.288) 0:31:18.650 ******* 2026-02-15 06:24:58.259655 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-2 2026-02-15 06:24:58.259686 | orchestrator | 2026-02-15 06:24:58.259697 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-15 06:24:58.259708 | orchestrator | Sunday 15 February 2026 06:24:41 +0000 (0:00:01.203) 0:31:19.854 ******* 2026-02-15 06:24:58.259719 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:58.259730 | orchestrator | 2026-02-15 06:24:58.259740 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-15 06:24:58.259751 | orchestrator | Sunday 15 February 2026 06:24:43 +0000 (0:00:01.592) 0:31:21.446 ******* 2026-02-15 06:24:58.259762 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:58.259772 | orchestrator | 2026-02-15 06:24:58.259783 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-15 06:24:58.259794 | orchestrator | Sunday 15 February 2026 06:24:44 +0000 (0:00:01.163) 0:31:22.609 ******* 2026-02-15 06:24:58.259804 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:58.259815 | orchestrator | 2026-02-15 06:24:58.259826 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-15 06:24:58.259836 | orchestrator | Sunday 15 February 2026 06:24:45 +0000 (0:00:01.172) 0:31:23.782 ******* 2026-02-15 06:24:58.259847 | orchestrator | skipping: [testbed-node-2] 2026-02-15 
06:24:58.259857 | orchestrator | 2026-02-15 06:24:58.259893 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-15 06:24:58.259904 | orchestrator | Sunday 15 February 2026 06:24:46 +0000 (0:00:01.157) 0:31:24.940 ******* 2026-02-15 06:24:58.259915 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:58.259926 | orchestrator | 2026-02-15 06:24:58.259937 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-15 06:24:58.259948 | orchestrator | Sunday 15 February 2026 06:24:48 +0000 (0:00:01.630) 0:31:26.570 ******* 2026-02-15 06:24:58.259959 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:58.259969 | orchestrator | 2026-02-15 06:24:58.259980 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-15 06:24:58.260055 | orchestrator | Sunday 15 February 2026 06:24:49 +0000 (0:00:01.160) 0:31:27.731 ******* 2026-02-15 06:24:58.260068 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:24:58.260079 | orchestrator | 2026-02-15 06:24:58.260089 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-15 06:24:58.260100 | orchestrator | Sunday 15 February 2026 06:24:50 +0000 (0:00:01.203) 0:31:28.935 ******* 2026-02-15 06:24:58.260110 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:58.260121 | orchestrator | 2026-02-15 06:24:58.260132 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-15 06:24:58.260142 | orchestrator | Sunday 15 February 2026 06:24:52 +0000 (0:00:01.619) 0:31:30.554 ******* 2026-02-15 06:24:58.260153 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:24:58.260163 | orchestrator | 2026-02-15 06:24:58.260178 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-15 06:24:58.260207 | orchestrator | Sunday 15 February 2026 
06:24:54 +0000 (0:00:01.652) 0:31:32.207 *******
2026-02-15 06:24:58.260227 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:24:58.260246 | orchestrator |
2026-02-15 06:24:58.260264 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 06:24:58.260282 | orchestrator | Sunday 15 February 2026 06:24:55 +0000 (0:00:00.900) 0:31:33.107 *******
2026-02-15 06:24:58.260300 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:24:58.260318 | orchestrator |
2026-02-15 06:24:58.260336 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 06:24:58.260356 | orchestrator | Sunday 15 February 2026 06:24:55 +0000 (0:00:00.816) 0:31:33.923 *******
2026-02-15 06:24:58.260376 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:24:58.260422 | orchestrator |
2026-02-15 06:24:58.260445 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 06:24:58.260463 | orchestrator | Sunday 15 February 2026 06:24:56 +0000 (0:00:00.807) 0:31:34.731 *******
2026-02-15 06:24:58.260481 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:24:58.260514 | orchestrator |
2026-02-15 06:24:58.260532 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 06:24:58.260551 | orchestrator | Sunday 15 February 2026 06:24:57 +0000 (0:00:00.841) 0:31:35.572 *******
2026-02-15 06:24:58.260578 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.303391 | orchestrator |
2026-02-15 06:25:39.303576 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 06:25:39.303638 | orchestrator | Sunday 15 February 2026 06:24:58 +0000 (0:00:00.777) 0:31:36.349 *******
2026-02-15 06:25:39.303652 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.303665 | orchestrator |
2026-02-15 06:25:39.303677 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 06:25:39.303688 | orchestrator | Sunday 15 February 2026 06:24:59 +0000 (0:00:00.833) 0:31:37.183 *******
2026-02-15 06:25:39.303699 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.303710 | orchestrator |
2026-02-15 06:25:39.303721 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 06:25:39.303732 | orchestrator | Sunday 15 February 2026 06:24:59 +0000 (0:00:00.826) 0:31:38.009 *******
2026-02-15 06:25:39.303743 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:25:39.303754 | orchestrator |
2026-02-15 06:25:39.303765 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 06:25:39.303776 | orchestrator | Sunday 15 February 2026 06:25:00 +0000 (0:00:00.812) 0:31:38.822 *******
2026-02-15 06:25:39.303787 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:25:39.303798 | orchestrator |
2026-02-15 06:25:39.303809 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 06:25:39.303852 | orchestrator | Sunday 15 February 2026 06:25:01 +0000 (0:00:00.858) 0:31:39.680 *******
2026-02-15 06:25:39.303872 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:25:39.303893 | orchestrator |
2026-02-15 06:25:39.303911 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-15 06:25:39.303931 | orchestrator | Sunday 15 February 2026 06:25:02 +0000 (0:00:00.861) 0:31:40.542 *******
2026-02-15 06:25:39.303950 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.303966 | orchestrator |
2026-02-15 06:25:39.303982 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-15 06:25:39.303999 | orchestrator | Sunday 15 February 2026 06:25:03 +0000 (0:00:00.836) 0:31:41.378 *******
2026-02-15 06:25:39.304016 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304031 | orchestrator |
2026-02-15 06:25:39.304047 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-15 06:25:39.304063 | orchestrator | Sunday 15 February 2026 06:25:04 +0000 (0:00:00.832) 0:31:42.211 *******
2026-02-15 06:25:39.304079 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304095 | orchestrator |
2026-02-15 06:25:39.304112 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-15 06:25:39.304130 | orchestrator | Sunday 15 February 2026 06:25:04 +0000 (0:00:00.861) 0:31:43.073 *******
2026-02-15 06:25:39.304146 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304164 | orchestrator |
2026-02-15 06:25:39.304182 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-15 06:25:39.304199 | orchestrator | Sunday 15 February 2026 06:25:05 +0000 (0:00:00.785) 0:31:43.858 *******
2026-02-15 06:25:39.304218 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304236 | orchestrator |
2026-02-15 06:25:39.304253 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-15 06:25:39.304271 | orchestrator | Sunday 15 February 2026 06:25:06 +0000 (0:00:00.748) 0:31:44.607 *******
2026-02-15 06:25:39.304288 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304308 | orchestrator |
2026-02-15 06:25:39.304327 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-15 06:25:39.304346 | orchestrator | Sunday 15 February 2026 06:25:07 +0000 (0:00:00.787) 0:31:45.395 *******
2026-02-15 06:25:39.304359 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304400 | orchestrator |
2026-02-15 06:25:39.304411 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-15 06:25:39.304423 | orchestrator | Sunday 15 February 2026 06:25:08 +0000 (0:00:00.772) 0:31:46.167 *******
2026-02-15 06:25:39.304434 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304445 | orchestrator |
2026-02-15 06:25:39.304455 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-15 06:25:39.304466 | orchestrator | Sunday 15 February 2026 06:25:08 +0000 (0:00:00.757) 0:31:46.924 *******
2026-02-15 06:25:39.304481 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304498 | orchestrator |
2026-02-15 06:25:39.304516 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-15 06:25:39.304533 | orchestrator | Sunday 15 February 2026 06:25:09 +0000 (0:00:00.790) 0:31:47.715 *******
2026-02-15 06:25:39.304551 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304565 | orchestrator |
2026-02-15 06:25:39.304576 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-15 06:25:39.304587 | orchestrator | Sunday 15 February 2026 06:25:10 +0000 (0:00:00.766) 0:31:48.482 *******
2026-02-15 06:25:39.304613 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304624 | orchestrator |
2026-02-15 06:25:39.304635 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-15 06:25:39.304645 | orchestrator | Sunday 15 February 2026 06:25:11 +0000 (0:00:00.766) 0:31:49.249 *******
2026-02-15 06:25:39.304656 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304667 | orchestrator |
2026-02-15 06:25:39.304678 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-15 06:25:39.304688 | orchestrator | Sunday 15 February 2026 06:25:11 +0000 (0:00:00.772) 0:31:50.021 *******
2026-02-15 06:25:39.304699 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:25:39.304710 | orchestrator |
2026-02-15 06:25:39.304720 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-15 06:25:39.304731 | orchestrator | Sunday 15 February 2026 06:25:13 +0000 (0:00:01.595) 0:31:51.617 *******
2026-02-15 06:25:39.304742 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:25:39.304752 | orchestrator |
2026-02-15 06:25:39.304763 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-15 06:25:39.304774 | orchestrator | Sunday 15 February 2026 06:25:15 +0000 (0:00:02.162) 0:31:53.780 *******
2026-02-15 06:25:39.304785 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-2
2026-02-15 06:25:39.304797 | orchestrator |
2026-02-15 06:25:39.304860 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-15 06:25:39.304873 | orchestrator | Sunday 15 February 2026 06:25:16 +0000 (0:00:01.263) 0:31:55.044 *******
2026-02-15 06:25:39.304884 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304895 | orchestrator |
2026-02-15 06:25:39.304906 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-15 06:25:39.304917 | orchestrator | Sunday 15 February 2026 06:25:18 +0000 (0:00:01.161) 0:31:56.205 *******
2026-02-15 06:25:39.304927 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.304938 | orchestrator |
2026-02-15 06:25:39.304949 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-15 06:25:39.304960 | orchestrator | Sunday 15 February 2026 06:25:19 +0000 (0:00:01.268) 0:31:57.473 *******
2026-02-15 06:25:39.304970 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-15 06:25:39.304981 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-15 06:25:39.304992 | orchestrator |
2026-02-15 06:25:39.305002 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-15 06:25:39.305013 | orchestrator | Sunday 15 February 2026 06:25:21 +0000 (0:00:01.834) 0:31:59.307 *******
2026-02-15 06:25:39.305024 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:25:39.305035 | orchestrator |
2026-02-15 06:25:39.305046 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-15 06:25:39.305066 | orchestrator | Sunday 15 February 2026 06:25:22 +0000 (0:00:01.463) 0:32:00.771 *******
2026-02-15 06:25:39.305078 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.305088 | orchestrator |
2026-02-15 06:25:39.305099 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-15 06:25:39.305110 | orchestrator | Sunday 15 February 2026 06:25:23 +0000 (0:00:01.120) 0:32:01.892 *******
2026-02-15 06:25:39.305121 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.305132 | orchestrator |
2026-02-15 06:25:39.305142 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-15 06:25:39.305153 | orchestrator | Sunday 15 February 2026 06:25:24 +0000 (0:00:00.771) 0:32:02.663 *******
2026-02-15 06:25:39.305164 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.305174 | orchestrator |
2026-02-15 06:25:39.305193 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-15 06:25:39.305211 | orchestrator | Sunday 15 February 2026 06:25:25 +0000 (0:00:00.792) 0:32:03.456 *******
2026-02-15 06:25:39.305228 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-2
2026-02-15 06:25:39.305246 | orchestrator |
2026-02-15 06:25:39.305265 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-15 06:25:39.305282 | orchestrator | Sunday 15 February 2026 06:25:26 +0000 (0:00:01.121) 0:32:04.578 *******
2026-02-15 06:25:39.305300 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:25:39.305318 | orchestrator |
2026-02-15 06:25:39.305333 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-15 06:25:39.305349 | orchestrator | Sunday 15 February 2026 06:25:28 +0000 (0:00:01.870) 0:32:06.448 *******
2026-02-15 06:25:39.305365 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-15 06:25:39.305382 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-15 06:25:39.305400 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-15 06:25:39.305418 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.305436 | orchestrator |
2026-02-15 06:25:39.305453 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-15 06:25:39.305469 | orchestrator | Sunday 15 February 2026 06:25:29 +0000 (0:00:01.202) 0:32:07.651 *******
2026-02-15 06:25:39.305487 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.305506 | orchestrator |
2026-02-15 06:25:39.305524 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-15 06:25:39.305541 | orchestrator | Sunday 15 February 2026 06:25:30 +0000 (0:00:01.189) 0:32:08.840 *******
2026-02-15 06:25:39.305559 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.305577 | orchestrator |
2026-02-15 06:25:39.305594 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-15 06:25:39.305614 | orchestrator | Sunday 15 February 2026 06:25:31 +0000 (0:00:01.203) 0:32:10.044 *******
2026-02-15 06:25:39.305633 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.305651 | orchestrator |
2026-02-15 06:25:39.305670 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-15 06:25:39.305682 | orchestrator | Sunday 15 February 2026 06:25:33 +0000 (0:00:01.213) 0:32:11.257 *******
2026-02-15 06:25:39.305702 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.305713 | orchestrator |
2026-02-15 06:25:39.305724 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-15 06:25:39.305734 | orchestrator | Sunday 15 February 2026 06:25:34 +0000 (0:00:01.180) 0:32:12.438 *******
2026-02-15 06:25:39.305745 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:25:39.305755 | orchestrator |
2026-02-15 06:25:39.305766 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-15 06:25:39.305776 | orchestrator | Sunday 15 February 2026 06:25:35 +0000 (0:00:00.788) 0:32:13.226 *******
2026-02-15 06:25:39.305787 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:25:39.305808 | orchestrator |
2026-02-15 06:25:39.305874 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-15 06:25:39.305886 | orchestrator | Sunday 15 February 2026 06:25:37 +0000 (0:00:02.180) 0:32:15.407 *******
2026-02-15 06:25:39.305897 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:25:39.305908 | orchestrator |
2026-02-15 06:25:39.305919 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-15 06:25:39.305929 | orchestrator | Sunday 15 February 2026 06:25:38 +0000 (0:00:00.846) 0:32:16.253 *******
2026-02-15 06:25:39.305940 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-2
2026-02-15 06:25:39.305951 | orchestrator |
2026-02-15 06:25:39.305976 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-15 06:26:16.443779 | orchestrator | Sunday 15 February 2026 06:25:39 +0000 (0:00:01.140) 0:32:17.394 *******
2026-02-15 06:26:16.443939 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.443958 | orchestrator |
2026-02-15 06:26:16.443971 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-15 06:26:16.443983 | orchestrator | Sunday 15 February 2026 06:25:40 +0000 (0:00:01.161) 0:32:18.556 *******
2026-02-15 06:26:16.443994 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444005 | orchestrator |
2026-02-15 06:26:16.444016 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-15 06:26:16.444027 | orchestrator | Sunday 15 February 2026 06:25:41 +0000 (0:00:01.142) 0:32:19.698 *******
2026-02-15 06:26:16.444038 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444049 | orchestrator |
2026-02-15 06:26:16.444060 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-15 06:26:16.444071 | orchestrator | Sunday 15 February 2026 06:25:42 +0000 (0:00:01.147) 0:32:20.846 *******
2026-02-15 06:26:16.444082 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444093 | orchestrator |
2026-02-15 06:26:16.444104 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-15 06:26:16.444115 | orchestrator | Sunday 15 February 2026 06:25:43 +0000 (0:00:01.172) 0:32:22.018 *******
2026-02-15 06:26:16.444125 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444136 | orchestrator |
2026-02-15 06:26:16.444147 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-15 06:26:16.444158 | orchestrator | Sunday 15 February 2026 06:25:45 +0000 (0:00:01.150) 0:32:23.169 *******
2026-02-15 06:26:16.444169 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444179 | orchestrator |
2026-02-15 06:26:16.444190 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-15 06:26:16.444201 | orchestrator | Sunday 15 February 2026 06:25:46 +0000 (0:00:01.146) 0:32:24.316 *******
2026-02-15 06:26:16.444212 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444223 | orchestrator |
2026-02-15 06:26:16.444234 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-15 06:26:16.444245 | orchestrator | Sunday 15 February 2026 06:25:47 +0000 (0:00:01.232) 0:32:25.548 *******
2026-02-15 06:26:16.444256 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444266 | orchestrator |
2026-02-15 06:26:16.444277 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-15 06:26:16.444288 | orchestrator | Sunday 15 February 2026 06:25:48 +0000 (0:00:01.144) 0:32:26.692 *******
2026-02-15 06:26:16.444299 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:26:16.444310 | orchestrator |
2026-02-15 06:26:16.444322 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-15 06:26:16.444335 | orchestrator | Sunday 15 February 2026 06:25:49 +0000 (0:00:00.842) 0:32:27.535 *******
2026-02-15 06:26:16.444348 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-2
2026-02-15 06:26:16.444362 | orchestrator |
2026-02-15 06:26:16.444375 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-15 06:26:16.444411 | orchestrator | Sunday 15 February 2026 06:25:50 +0000 (0:00:01.148) 0:32:28.683 *******
2026-02-15 06:26:16.444424 | orchestrator | ok: [testbed-node-2] => (item=/etc/ceph)
2026-02-15 06:26:16.444437 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/)
2026-02-15 06:26:16.444449 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-02-15 06:26:16.444461 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-02-15 06:26:16.444473 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-02-15 06:26:16.444485 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-02-15 06:26:16.444498 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-02-15 06:26:16.444510 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-02-15 06:26:16.444523 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-15 06:26:16.444535 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-15 06:26:16.444547 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 06:26:16.444560 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 06:26:16.444572 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 06:26:16.444585 | orchestrator | ok: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 06:26:16.444597 | orchestrator | ok: [testbed-node-2] => (item=/var/run/ceph)
2026-02-15 06:26:16.444625 | orchestrator | ok: [testbed-node-2] => (item=/var/log/ceph)
2026-02-15 06:26:16.444639 | orchestrator |
2026-02-15 06:26:16.444651 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-15 06:26:16.444664 | orchestrator | Sunday 15 February 2026 06:25:57 +0000 (0:00:06.434) 0:32:35.118 *******
2026-02-15 06:26:16.444676 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444687 | orchestrator |
2026-02-15 06:26:16.444698 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-15 06:26:16.444708 | orchestrator | Sunday 15 February 2026 06:25:57 +0000 (0:00:00.806) 0:32:35.924 *******
2026-02-15 06:26:16.444719 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444730 | orchestrator |
2026-02-15 06:26:16.444741 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 06:26:16.444751 | orchestrator | Sunday 15 February 2026 06:25:58 +0000 (0:00:00.795) 0:32:36.720 *******
2026-02-15 06:26:16.444762 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444773 | orchestrator |
2026-02-15 06:26:16.444805 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 06:26:16.444817 | orchestrator | Sunday 15 February 2026 06:25:59 +0000 (0:00:00.769) 0:32:37.490 *******
2026-02-15 06:26:16.444827 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444838 | orchestrator |
2026-02-15 06:26:16.444849 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 06:26:16.444876 | orchestrator | Sunday 15 February 2026 06:26:00 +0000 (0:00:00.805) 0:32:38.295 *******
2026-02-15 06:26:16.444888 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444898 | orchestrator |
2026-02-15 06:26:16.444909 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 06:26:16.444920 | orchestrator | Sunday 15 February 2026 06:26:00 +0000 (0:00:00.786) 0:32:39.081 *******
2026-02-15 06:26:16.444930 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444941 | orchestrator |
2026-02-15 06:26:16.444951 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-15 06:26:16.444962 | orchestrator | Sunday 15 February 2026 06:26:01 +0000 (0:00:00.822) 0:32:39.904 *******
2026-02-15 06:26:16.444972 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.444983 | orchestrator |
2026-02-15 06:26:16.444994 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-15 06:26:16.445004 | orchestrator | Sunday 15 February 2026 06:26:02 +0000 (0:00:00.817) 0:32:40.721 *******
2026-02-15 06:26:16.445024 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445034 | orchestrator |
2026-02-15 06:26:16.445045 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-15 06:26:16.445056 | orchestrator | Sunday 15 February 2026 06:26:03 +0000 (0:00:00.826) 0:32:41.548 *******
2026-02-15 06:26:16.445066 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445077 | orchestrator |
2026-02-15 06:26:16.445087 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-15 06:26:16.445098 | orchestrator | Sunday 15 February 2026 06:26:04 +0000 (0:00:00.789) 0:32:42.337 *******
2026-02-15 06:26:16.445108 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445119 | orchestrator |
2026-02-15 06:26:16.445135 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-15 06:26:16.445152 | orchestrator | Sunday 15 February 2026 06:26:05 +0000 (0:00:00.835) 0:32:43.173 *******
2026-02-15 06:26:16.445170 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445188 | orchestrator |
2026-02-15 06:26:16.445206 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-15 06:26:16.445223 | orchestrator | Sunday 15 February 2026 06:26:05 +0000 (0:00:00.834) 0:32:44.007 *******
2026-02-15 06:26:16.445240 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445257 | orchestrator |
2026-02-15 06:26:16.445275 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-15 06:26:16.445290 | orchestrator | Sunday 15 February 2026 06:26:06 +0000 (0:00:00.778) 0:32:44.785 *******
2026-02-15 06:26:16.445305 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445323 | orchestrator |
2026-02-15 06:26:16.445341 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-15 06:26:16.445359 | orchestrator | Sunday 15 February 2026 06:26:07 +0000 (0:00:00.873) 0:32:45.659 *******
2026-02-15 06:26:16.445376 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445394 | orchestrator |
2026-02-15 06:26:16.445413 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-15 06:26:16.445431 | orchestrator | Sunday 15 February 2026 06:26:08 +0000 (0:00:00.866) 0:32:46.525 *******
2026-02-15 06:26:16.445450 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445465 | orchestrator |
2026-02-15 06:26:16.445476 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-15 06:26:16.445486 | orchestrator | Sunday 15 February 2026 06:26:09 +0000 (0:00:00.912) 0:32:47.438 *******
2026-02-15 06:26:16.445497 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445508 | orchestrator |
2026-02-15 06:26:16.445518 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-15 06:26:16.445529 | orchestrator | Sunday 15 February 2026 06:26:10 +0000 (0:00:00.808) 0:32:48.246 *******
2026-02-15 06:26:16.445539 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445550 | orchestrator |
2026-02-15 06:26:16.445561 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:26:16.445573 | orchestrator | Sunday 15 February 2026 06:26:10 +0000 (0:00:00.800) 0:32:49.047 *******
2026-02-15 06:26:16.445584 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445594 | orchestrator |
2026-02-15 06:26:16.445604 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:26:16.445615 | orchestrator | Sunday 15 February 2026 06:26:11 +0000 (0:00:00.802) 0:32:49.849 *******
2026-02-15 06:26:16.445625 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445636 | orchestrator |
2026-02-15 06:26:16.445646 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:26:16.445665 | orchestrator | Sunday 15 February 2026 06:26:12 +0000 (0:00:00.783) 0:32:50.633 *******
2026-02-15 06:26:16.445676 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445686 | orchestrator |
2026-02-15 06:26:16.445697 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:26:16.445716 | orchestrator | Sunday 15 February 2026 06:26:13 +0000 (0:00:00.878) 0:32:51.512 *******
2026-02-15 06:26:16.445726 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445737 | orchestrator |
2026-02-15 06:26:16.445748 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:26:16.445758 | orchestrator | Sunday 15 February 2026 06:26:14 +0000 (0:00:00.772) 0:32:52.284 *******
2026-02-15 06:26:16.445769 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-15 06:26:16.445780 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-15 06:26:16.445826 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-15 06:26:16.445838 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:26:16.445849 | orchestrator |
2026-02-15 06:26:16.445859 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:26:16.445870 | orchestrator | Sunday 15 February 2026 06:26:15 +0000 (0:00:01.193) 0:32:53.478 *******
2026-02-15 06:26:16.445881 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-15 06:26:16.445902 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-15 06:27:13.624882 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-15 06:27:13.624995 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:27:13.625009 | orchestrator |
2026-02-15 06:27:13.625020 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:27:13.625032 | orchestrator | Sunday 15 February 2026 06:26:16 +0000 (0:00:01.060) 0:32:54.539 *******
2026-02-15 06:27:13.625042 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-02-15 06:27:13.625052 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-02-15 06:27:13.625062 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-02-15 06:27:13.625071 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:27:13.625081 | orchestrator |
2026-02-15 06:27:13.625091 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:27:13.625100 | orchestrator | Sunday 15 February 2026 06:26:17 +0000 (0:00:01.085) 0:32:55.624 *******
2026-02-15 06:27:13.625110 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:27:13.625120 | orchestrator |
2026-02-15 06:27:13.625129 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:27:13.625139 | orchestrator | Sunday 15 February 2026 06:26:18 +0000 (0:00:00.790) 0:32:56.415 *******
2026-02-15 06:27:13.625149 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-02-15 06:27:13.625170 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:27:13.625190 | orchestrator |
2026-02-15 06:27:13.625200 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-15 06:27:13.625209 | orchestrator | Sunday 15 February 2026 06:26:19 +0000 (0:00:00.928) 0:32:57.344 *******
2026-02-15 06:27:13.625219 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:27:13.625229 | orchestrator |
2026-02-15 06:27:13.625238 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] **********************************
2026-02-15 06:27:13.625248 | orchestrator | Sunday 15 February 2026 06:26:20 +0000 (0:00:01.435) 0:32:58.779 *******
2026-02-15 06:27:13.625257 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:27:13.625268 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:27:13.625278 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2026-02-15 06:27:13.625287 | orchestrator |
2026-02-15 06:27:13.625297 | orchestrator | TASK [ceph-mgr : Include common.yml] *******************************************
2026-02-15 06:27:13.625307 | orchestrator | Sunday 15 February 2026 06:26:22 +0000 (0:00:01.696) 0:33:00.476 *******
2026-02-15 06:27:13.625316 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-2
2026-02-15 06:27:13.625325 | orchestrator |
2026-02-15 06:27:13.625335 | orchestrator | TASK [ceph-mgr : Create mgr directory] *****************************************
2026-02-15 06:27:13.625368 | orchestrator | Sunday 15 February 2026 06:26:23 +0000 (0:00:01.152) 0:33:01.628 *******
2026-02-15 06:27:13.625381 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:27:13.625393 | orchestrator |
2026-02-15 06:27:13.625404 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] ***************************************
2026-02-15 06:27:13.625415 | orchestrator | Sunday 15 February 2026 06:26:25 +0000 (0:00:01.531) 0:33:03.160 *******
2026-02-15 06:27:13.625426 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:27:13.625437 | orchestrator |
2026-02-15 06:27:13.625449 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] *********************
2026-02-15 06:27:13.625460 | orchestrator | Sunday 15 February 2026 06:26:26 +0000 (0:00:01.101) 0:33:04.261 *******
2026-02-15 06:27:13.625471 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-15 06:27:13.625482 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-15 06:27:13.625493 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-15 06:27:13.625504 | orchestrator | ok: [testbed-node-2 -> {{ groups[mon_group_name][0] }}]
2026-02-15 06:27:13.625515 | orchestrator |
2026-02-15 06:27:13.625526 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] *******************************************
2026-02-15 06:27:13.625537 | orchestrator | Sunday 15 February 2026 06:26:33 +0000 (0:00:07.201) 0:33:11.463 *******
2026-02-15 06:27:13.625548 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:27:13.625559 | orchestrator |
2026-02-15 06:27:13.625570 | orchestrator | TASK [ceph-mgr : Get keys from monitors] ***************************************
2026-02-15 06:27:13.625580 | orchestrator | Sunday 15 February 2026 06:26:34 +0000 (0:00:01.211) 0:33:12.675 *******
2026-02-15 06:27:13.625592 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-15 06:27:13.625617 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-15 06:27:13.625628 | orchestrator |
2026-02-15 06:27:13.625640 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] ***********************************
2026-02-15 06:27:13.625651 | orchestrator | Sunday 15 February 2026 06:26:37 +0000 (0:00:03.351) 0:33:16.027 *******
2026-02-15 06:27:13.625661 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-02-15 06:27:13.625673 | orchestrator | ok: [testbed-node-2] => (item=None)
2026-02-15 06:27:13.625684 | orchestrator |
2026-02-15 06:27:13.625695 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] **************************************
2026-02-15 06:27:13.625706 | orchestrator | Sunday 15 February 2026 06:26:40 +0000 (0:00:02.091) 0:33:18.118 *******
2026-02-15 06:27:13.625717 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:27:13.625727 | orchestrator |
2026-02-15 06:27:13.625758 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] *****************
2026-02-15 06:27:13.625770 | orchestrator | Sunday 15 February 2026 06:26:41 +0000 (0:00:01.508) 0:33:19.626 *******
2026-02-15 06:27:13.625781 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:27:13.625791 | orchestrator |
2026-02-15 06:27:13.625800 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************
2026-02-15 06:27:13.625810 | orchestrator | Sunday 15 February 2026 06:26:42 +0000 (0:00:00.761) 0:33:20.388 *******
2026-02-15 06:27:13.625820 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:27:13.625830 | orchestrator |
2026-02-15 06:27:13.625839 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] ****************************************
2026-02-15 06:27:13.625864 | orchestrator | Sunday 15 February 2026 06:26:43 +0000 (0:00:00.787) 0:33:21.175 *******
2026-02-15 06:27:13.625874 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-2
2026-02-15 06:27:13.625884 | orchestrator |
2026-02-15 06:27:13.625893 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] *************
2026-02-15 06:27:13.625903 | orchestrator | Sunday 15 February 2026 06:26:44 +0000 (0:00:01.154) 0:33:22.330 *******
2026-02-15 06:27:13.625912 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:27:13.625921 | orchestrator |
2026-02-15 06:27:13.625931 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] ***********************
2026-02-15 06:27:13.625940 | orchestrator | Sunday 15 February 2026 06:26:45 +0000 (0:00:01.169) 0:33:23.499 *******
2026-02-15 06:27:13.625957 | orchestrator | skipping: [testbed-node-2]
2026-02-15 06:27:13.625967 | orchestrator |
2026-02-15 06:27:13.625976 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************
2026-02-15 06:27:13.625985 | orchestrator | Sunday 15 February 2026 06:26:46 +0000 (0:00:01.165) 0:33:24.665 *******
2026-02-15 06:27:13.625995 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-2
2026-02-15 06:27:13.626004 | orchestrator |
2026-02-15 06:27:13.626067 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] ***********************************
2026-02-15 06:27:13.626080 | orchestrator | Sunday 15 February 2026 06:26:47 +0000 (0:00:01.258) 0:33:25.924 *******
2026-02-15 06:27:13.626090 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:27:13.626100 | orchestrator |
2026-02-15 06:27:13.626109 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************
2026-02-15 06:27:13.626119 | orchestrator | Sunday 15 February 2026 06:26:49 +0000 (0:00:02.024) 0:33:27.949 *******
2026-02-15 06:27:13.626128 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:27:13.626138 | orchestrator |
2026-02-15 06:27:13.626147 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] ***************************************
2026-02-15 06:27:13.626157 | orchestrator | Sunday 15 February 2026 06:26:51 +0000 (0:00:02.502) 0:33:29.857 *******
2026-02-15 06:27:13.626166 | orchestrator | ok: [testbed-node-2]
2026-02-15 06:27:13.626176 | orchestrator |
2026-02-15 06:27:13.626185 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ********************************************
2026-02-15 06:27:13.626195 | orchestrator | Sunday 15 February 2026 06:26:54 +0000 (0:00:02.502) 0:33:32.360 *******
2026-02-15 06:27:13.626204 | orchestrator | changed: [testbed-node-2]
2026-02-15 06:27:13.626214 | orchestrator |
2026-02-15 06:27:13.626223 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml]
************************************** 2026-02-15 06:27:13.626233 | orchestrator | Sunday 15 February 2026 06:26:57 +0000 (0:00:03.516) 0:33:35.876 ******* 2026-02-15 06:27:13.626242 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-02-15 06:27:13.626252 | orchestrator | 2026-02-15 06:27:13.626261 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-02-15 06:27:13.626270 | orchestrator | Sunday 15 February 2026 06:26:59 +0000 (0:00:01.514) 0:33:37.391 ******* 2026-02-15 06:27:13.626280 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:27:13.626289 | orchestrator | 2026-02-15 06:27:13.626299 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-02-15 06:27:13.626308 | orchestrator | Sunday 15 February 2026 06:27:01 +0000 (0:00:02.432) 0:33:39.824 ******* 2026-02-15 06:27:13.626318 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:27:13.626327 | orchestrator | 2026-02-15 06:27:13.626337 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-02-15 06:27:13.626346 | orchestrator | Sunday 15 February 2026 06:27:04 +0000 (0:00:02.355) 0:33:42.180 ******* 2026-02-15 06:27:13.626356 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:27:13.626365 | orchestrator | 2026-02-15 06:27:13.626374 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-02-15 06:27:13.626384 | orchestrator | Sunday 15 February 2026 06:27:05 +0000 (0:00:01.370) 0:33:43.551 ******* 2026-02-15 06:27:13.626393 | orchestrator | ok: [testbed-node-2] 2026-02-15 06:27:13.626403 | orchestrator | 2026-02-15 06:27:13.626412 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-02-15 06:27:13.626422 | orchestrator | Sunday 15 February 2026 
06:27:06 +0000 (0:00:01.154) 0:33:44.706 ******* 2026-02-15 06:27:13.626431 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-15 06:27:13.626441 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-15 06:27:13.626450 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:27:13.626460 | orchestrator | 2026-02-15 06:27:13.626469 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2026-02-15 06:27:13.626484 | orchestrator | Sunday 15 February 2026 06:27:08 +0000 (0:00:01.746) 0:33:46.452 ******* 2026-02-15 06:27:13.626501 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-02-15 06:27:13.626510 | orchestrator | skipping: [testbed-node-2] => (item=dashboard)  2026-02-15 06:27:13.626520 | orchestrator | skipping: [testbed-node-2] => (item=prometheus)  2026-02-15 06:27:13.626530 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-02-15 06:27:13.626539 | orchestrator | skipping: [testbed-node-2] 2026-02-15 06:27:13.626549 | orchestrator | 2026-02-15 06:27:13.626558 | orchestrator | PLAY [Set osd flags] *********************************************************** 2026-02-15 06:27:13.626568 | orchestrator | 2026-02-15 06:27:13.626577 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 06:27:13.626586 | orchestrator | Sunday 15 February 2026 06:27:10 +0000 (0:00:01.945) 0:33:48.397 ******* 2026-02-15 06:27:13.626596 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:27:13.626605 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:27:13.626615 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:27:13.626624 | orchestrator | 2026-02-15 06:27:13.626634 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 06:27:13.626643 | orchestrator | Sunday 15 February 2026 06:27:11 +0000 (0:00:01.702) 0:33:50.100 ******* 2026-02-15 06:27:13.626652 | 
orchestrator | ok: [testbed-node-3] 2026-02-15 06:27:13.626662 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:27:13.626671 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:27:13.626681 | orchestrator | 2026-02-15 06:27:13.626697 | orchestrator | TASK [Get pool list] *********************************************************** 2026-02-15 06:27:20.084713 | orchestrator | Sunday 15 February 2026 06:27:13 +0000 (0:00:01.613) 0:33:51.713 ******* 2026-02-15 06:27:20.084908 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:27:20.084936 | orchestrator | 2026-02-15 06:27:20.084959 | orchestrator | TASK [Get balancer module status] ********************************************** 2026-02-15 06:27:20.084980 | orchestrator | Sunday 15 February 2026 06:27:16 +0000 (0:00:02.995) 0:33:54.709 ******* 2026-02-15 06:27:20.084998 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:27:20.085015 | orchestrator | 2026-02-15 06:27:20.085036 | orchestrator | TASK [Set_fact pools_pgautoscaler_mode] **************************************** 2026-02-15 06:27:20.085054 | orchestrator | Sunday 15 February 2026 06:27:19 +0000 (0:00:02.906) 0:33:57.616 ******* 2026-02-15 06:27:20.085081 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 1, 'pool_name': '.mgr', 'create_time': '2026-02-15T03:51:00.433547+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 1, 'pg_placement_num': 1, 'pg_placement_num_target': 1, 'pg_num_target': 1, 'pg_num_pending': 1, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '20', 
'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_num_max': 32, 'pg_num_min': 1}, 'application_metadata': {'mgr': {}}, 'read_balance': {'score_acting': 6.059999942779541, 'score_stable': 6.059999942779541, 'optimal_score': 0.33000001311302185, 'raw_score_acting': 2, 'raw_score_stable': 2, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:20.085194 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 2, 'pool_name': 'cephfs_data', 'create_time': '2026-02-15T03:52:16.207903+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 
'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '32', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'cephfs': {'data': 'cephfs'}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:20.085223 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 3, 'pool_name': 'cephfs_metadata', 'create_time': '2026-02-15T03:52:20.223602+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 16, 'pg_placement_num': 16, 'pg_placement_num_target': 16, 'pg_num_target': 16, 'pg_num_pending': 16, 
'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '64', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '30', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {'pg_autoscale_bias': 4, 'pg_num_min': 16, 'recovery_priority': 5}, 'application_metadata': {'cephfs': {'metadata': 'cephfs'}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:20.085277 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 4, 'pool_name': 'default.rgw.buckets.data', 'create_time': '2026-02-15T03:53:21.677750+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 
'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:20.543273 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 5, 'pool_name': 'default.rgw.buckets.index', 'create_time': '2026-02-15T03:53:27.811211+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 
'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '71', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.309999942779541, 'score_stable': 1.309999942779541, 'optimal_score': 1, 'raw_score_acting': 1.309999942779541, 'raw_score_stable': 1.309999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:20.543395 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 6, 'pool_name': 'default.rgw.control', 'create_time': '2026-02-15T03:53:33.950016+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 
'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:20.543425 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 7, 'pool_name': 'default.rgw.log', 'create_time': '2026-02-15T03:53:39.957110+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 
'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '179', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '73', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:20.543444 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 8, 'pool_name': 'default.rgw.meta', 'create_time': '2026-02-15T03:53:46.282741+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 
'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:20.543458 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 9, 'pool_name': '.rgw.root', 'create_time': '2026-02-15T03:53:58.568076+0000', 'flags': 1, 'flags_names': 'hashpspool', 'type': 1, 'size': 2, 'min_size': 1, 'crush_rule': 
0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'on', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '77', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '75', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 0, 'snap_epoch': 0, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rgw': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:22.522379 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 10, 'pool_name': 'backups', 'create_time': '2026-02-15T03:54:46.491108+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 
'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '114', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 114, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 2.059999942779541, 'score_stable': 2.059999942779541, 'optimal_score': 1, 'raw_score_acting': 2.059999942779541, 'raw_score_stable': 2.059999942779541, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:22.522504 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 11, 'pool_name': 'volumes', 'create_time': '2026-02-15T03:54:55.347173+0000', 
'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '123', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 123, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.690000057220459, 'score_stable': 1.690000057220459, 'optimal_score': 1, 'raw_score_acting': 1.690000057220459, 'raw_score_stable': 1.690000057220459, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:22.522550 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => 
(item={'pool_id': 12, 'pool_name': 'images', 'create_time': '2026-02-15T03:55:04.460192+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '189', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 6, 'snap_epoch': 189, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.1299999952316284, 'score_stable': 1.1299999952316284, 'optimal_score': 1, 'raw_score_acting': 1.1299999952316284, 'raw_score_stable': 1.1299999952316284, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 
06:27:22.522565 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 13, 'pool_name': 'metrics', 'create_time': '2026-02-15T03:55:13.418350+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '138', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 138, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 'average_primary_affinity': 1, 
'average_primary_affinity_weighted': 1}}) 2026-02-15 06:27:22.522602 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'pool_id': 14, 'pool_name': 'vms', 'create_time': '2026-02-15T03:55:21.493990+0000', 'flags': 8193, 'flags_names': 'hashpspool,selfmanaged_snaps', 'type': 1, 'size': 3, 'min_size': 2, 'crush_rule': 0, 'peering_crush_bucket_count': 0, 'peering_crush_bucket_target': 0, 'peering_crush_bucket_barrier': 0, 'peering_crush_bucket_mandatory_member': 2147483647, 'object_hash': 2, 'pg_autoscale_mode': 'off', 'pg_num': 32, 'pg_placement_num': 32, 'pg_placement_num_target': 32, 'pg_num_target': 32, 'pg_num_pending': 32, 'last_pg_merge_meta': {'source_pgid': '0.0', 'ready_epoch': 0, 'last_epoch_started': 0, 'last_epoch_clean': 0, 'source_version': "0'0", 'target_version': "0'0"}, 'last_change': '147', 'last_force_op_resend': '0', 'last_force_op_resend_prenautilus': '0', 'last_force_op_resend_preluminous': '0', 'auid': 0, 'snap_mode': 'selfmanaged', 'snap_seq': 3, 'snap_epoch': 147, 'pool_snaps': [], 'removed_snaps': '[]', 'quota_max_bytes': 0, 'quota_max_objects': 0, 'tiers': [], 'tier_of': -1, 'read_tier': -1, 'write_tier': -1, 'cache_mode': 'none', 'target_max_bytes': 0, 'target_max_objects': 0, 'cache_target_dirty_ratio_micro': 400000, 'cache_target_dirty_high_ratio_micro': 600000, 'cache_target_full_ratio_micro': 800000, 'cache_min_flush_age': 0, 'cache_min_evict_age': 0, 'erasure_code_profile': '', 'hit_set_params': {'type': 'none'}, 'hit_set_period': 0, 'hit_set_count': 0, 'use_gmt_hitset': True, 'min_read_recency_for_promote': 0, 'min_write_recency_for_promote': 0, 'hit_set_grade_decay_rate': 0, 'hit_set_search_last_n': 0, 'grade_table': [], 'stripe_width': 0, 'expected_num_objects': 0, 'fast_read': False, 'options': {}, 'application_metadata': {'rbd': {}}, 'read_balance': {'score_acting': 1.5, 'score_stable': 1.5, 'optimal_score': 1, 'raw_score_acting': 1.5, 'raw_score_stable': 1.5, 'primary_affinity_weighted': 1, 
'average_primary_affinity': 1, 'average_primary_affinity_weighted': 1}}) 2026-02-15 06:29:09.017102 | orchestrator | 2026-02-15 06:29:09.017222 | orchestrator | TASK [Disable balancer] ******************************************************** 2026-02-15 06:29:09.017239 | orchestrator | Sunday 15 February 2026 06:27:22 +0000 (0:00:03.005) 0:34:00.622 ******* 2026-02-15 06:29:09.017251 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:29:09.017263 | orchestrator | 2026-02-15 06:29:09.017274 | orchestrator | TASK [Disable pg autoscale on pools] ******************************************* 2026-02-15 06:29:09.017285 | orchestrator | Sunday 15 February 2026 06:27:25 +0000 (0:00:02.991) 0:34:03.614 ******* 2026-02-15 06:29:09.017296 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'}) 2026-02-15 06:29:09.017308 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'}) 2026-02-15 06:29:09.017320 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'}) 2026-02-15 06:29:09.017331 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'}) 2026-02-15 06:29:09.017343 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'}) 2026-02-15 06:29:09.017354 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'}) 2026-02-15 06:29:09.017365 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'}) 2026-02-15 06:29:09.017401 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'}) 
2026-02-15 06:29:09.017414 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'}) 2026-02-15 06:29:09.017434 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})  2026-02-15 06:29:09.017454 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})  2026-02-15 06:29:09.017473 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})  2026-02-15 06:29:09.017493 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})  2026-02-15 06:29:09.017513 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})  2026-02-15 06:29:09.017533 | orchestrator | 2026-02-15 06:29:09.017552 | orchestrator | TASK [Set osd flags] *********************************************************** 2026-02-15 06:29:09.017571 | orchestrator | Sunday 15 February 2026 06:28:39 +0000 (0:01:13.797) 0:35:17.411 ******* 2026-02-15 06:29:09.017590 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout) 2026-02-15 06:29:09.017611 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub) 2026-02-15 06:29:09.017631 | orchestrator | 2026-02-15 06:29:09.017683 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-15 06:29:09.017702 | orchestrator | 2026-02-15 06:29:09.017722 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 06:29:09.017741 | orchestrator | Sunday 15 February 2026 06:28:45 +0000 (0:00:06.268) 0:35:23.680 ******* 2026-02-15 06:29:09.017759 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3 2026-02-15 06:29:09.017778 | orchestrator | 2026-02-15 06:29:09.017797 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 
2026-02-15 06:29:09.017817 | orchestrator | Sunday 15 February 2026 06:28:46 +0000 (0:00:01.096) 0:35:24.777 ******* 2026-02-15 06:29:09.017837 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:09.017858 | orchestrator | 2026-02-15 06:29:09.017877 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 06:29:09.017912 | orchestrator | Sunday 15 February 2026 06:28:48 +0000 (0:00:01.475) 0:35:26.252 ******* 2026-02-15 06:29:09.017933 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:09.017952 | orchestrator | 2026-02-15 06:29:09.017993 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 06:29:09.018088 | orchestrator | Sunday 15 February 2026 06:28:49 +0000 (0:00:01.133) 0:35:27.386 ******* 2026-02-15 06:29:09.018112 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:09.018134 | orchestrator | 2026-02-15 06:29:09.018156 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 06:29:09.018178 | orchestrator | Sunday 15 February 2026 06:28:50 +0000 (0:00:01.506) 0:35:28.893 ******* 2026-02-15 06:29:09.018201 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:09.018223 | orchestrator | 2026-02-15 06:29:09.018245 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 06:29:09.018267 | orchestrator | Sunday 15 February 2026 06:28:51 +0000 (0:00:01.142) 0:35:30.035 ******* 2026-02-15 06:29:09.018290 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:09.018312 | orchestrator | 2026-02-15 06:29:09.018334 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 06:29:09.018357 | orchestrator | Sunday 15 February 2026 06:28:53 +0000 (0:00:01.207) 0:35:31.242 ******* 2026-02-15 06:29:09.018380 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:09.018402 | orchestrator | 2026-02-15 
06:29:09.018424 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 06:29:09.018446 | orchestrator | Sunday 15 February 2026 06:28:54 +0000 (0:00:01.141) 0:35:32.384 ******* 2026-02-15 06:29:09.018469 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:09.018491 | orchestrator | 2026-02-15 06:29:09.018513 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 06:29:09.018583 | orchestrator | Sunday 15 February 2026 06:28:55 +0000 (0:00:01.218) 0:35:33.603 ******* 2026-02-15 06:29:09.018605 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:09.018626 | orchestrator | 2026-02-15 06:29:09.018675 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 06:29:09.018697 | orchestrator | Sunday 15 February 2026 06:28:56 +0000 (0:00:01.153) 0:35:34.757 ******* 2026-02-15 06:29:09.018717 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:29:09.018737 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:29:09.018757 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:29:09.018776 | orchestrator | 2026-02-15 06:29:09.018797 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 06:29:09.018817 | orchestrator | Sunday 15 February 2026 06:28:58 +0000 (0:00:01.683) 0:35:36.440 ******* 2026-02-15 06:29:09.018838 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:09.018857 | orchestrator | 2026-02-15 06:29:09.018878 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 06:29:09.018896 | orchestrator | Sunday 15 February 2026 06:28:59 +0000 (0:00:01.329) 0:35:37.770 ******* 2026-02-15 06:29:09.018915 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:29:09.018933 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:29:09.018952 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:29:09.018972 | orchestrator | 2026-02-15 06:29:09.018992 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 06:29:09.019013 | orchestrator | Sunday 15 February 2026 06:29:02 +0000 (0:00:03.286) 0:35:41.057 ******* 2026-02-15 06:29:09.019033 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-15 06:29:09.019054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-15 06:29:09.019074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-15 06:29:09.019095 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:09.019115 | orchestrator | 2026-02-15 06:29:09.019134 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 06:29:09.019152 | orchestrator | Sunday 15 February 2026 06:29:04 +0000 (0:00:01.492) 0:35:42.550 ******* 2026-02-15 06:29:09.019171 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 06:29:09.019194 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 06:29:09.019212 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 06:29:09.019231 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:09.019250 | orchestrator | 2026-02-15 06:29:09.019269 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 06:29:09.019288 | orchestrator | Sunday 15 February 2026 06:29:06 +0000 (0:00:01.953) 0:35:44.503 ******* 2026-02-15 06:29:09.019322 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:09.019361 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:09.019481 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:09.019512 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:09.019533 | orchestrator | 2026-02-15 06:29:09.019554 | orchestrator | TASK [ceph-facts : Set_fact 
running_mon - container] *************************** 2026-02-15 06:29:09.019576 | orchestrator | Sunday 15 February 2026 06:29:07 +0000 (0:00:01.321) 0:35:45.825 ******* 2026-02-15 06:29:09.019621 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:29:00.236217', 'end': '2026-02-15 06:29:00.293381', 'delta': '0:00:00.057164', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 06:29:27.874825 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:29:00.814702', 'end': '2026-02-15 06:29:00.859743', 'delta': '0:00:00.045041', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 06:29:27.874943 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 
06:29:01.696896', 'end': '2026-02-15 06:29:01.742817', 'delta': '0:00:00.045921', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 06:29:27.874961 | orchestrator | 2026-02-15 06:29:27.874976 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 06:29:27.874988 | orchestrator | Sunday 15 February 2026 06:29:09 +0000 (0:00:01.287) 0:35:47.113 ******* 2026-02-15 06:29:27.874999 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:27.875012 | orchestrator | 2026-02-15 06:29:27.875023 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 06:29:27.875059 | orchestrator | Sunday 15 February 2026 06:29:10 +0000 (0:00:01.687) 0:35:48.800 ******* 2026-02-15 06:29:27.875072 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:27.875084 | orchestrator | 2026-02-15 06:29:27.875094 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-15 06:29:27.875105 | orchestrator | Sunday 15 February 2026 06:29:12 +0000 (0:00:01.414) 0:35:50.214 ******* 2026-02-15 06:29:27.875116 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:27.875126 | orchestrator | 2026-02-15 06:29:27.875137 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-15 06:29:27.875161 | orchestrator | Sunday 15 February 2026 06:29:13 +0000 (0:00:01.228) 0:35:51.443 ******* 2026-02-15 06:29:27.875173 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 
2026-02-15 06:29:27.875184 | orchestrator | 2026-02-15 06:29:27.875194 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:29:27.875205 | orchestrator | Sunday 15 February 2026 06:29:15 +0000 (0:00:02.103) 0:35:53.546 ******* 2026-02-15 06:29:27.875216 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:27.875226 | orchestrator | 2026-02-15 06:29:27.875237 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-15 06:29:27.875248 | orchestrator | Sunday 15 February 2026 06:29:16 +0000 (0:00:01.163) 0:35:54.710 ******* 2026-02-15 06:29:27.875258 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:27.875269 | orchestrator | 2026-02-15 06:29:27.875282 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-15 06:29:27.875296 | orchestrator | Sunday 15 February 2026 06:29:17 +0000 (0:00:01.145) 0:35:55.855 ******* 2026-02-15 06:29:27.875308 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:27.875320 | orchestrator | 2026-02-15 06:29:27.875332 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:29:27.875345 | orchestrator | Sunday 15 February 2026 06:29:18 +0000 (0:00:01.230) 0:35:57.086 ******* 2026-02-15 06:29:27.875357 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:27.875369 | orchestrator | 2026-02-15 06:29:27.875381 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-15 06:29:27.875393 | orchestrator | Sunday 15 February 2026 06:29:20 +0000 (0:00:01.139) 0:35:58.226 ******* 2026-02-15 06:29:27.875406 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:27.875418 | orchestrator | 2026-02-15 06:29:27.875431 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-15 06:29:27.875443 | orchestrator | Sunday 
15 February 2026 06:29:21 +0000 (0:00:01.147) 0:35:59.374 ******* 2026-02-15 06:29:27.875455 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:27.875467 | orchestrator | 2026-02-15 06:29:27.875480 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-15 06:29:27.875492 | orchestrator | Sunday 15 February 2026 06:29:22 +0000 (0:00:01.168) 0:36:00.543 ******* 2026-02-15 06:29:27.875504 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:27.875516 | orchestrator | 2026-02-15 06:29:27.875528 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-15 06:29:27.875541 | orchestrator | Sunday 15 February 2026 06:29:23 +0000 (0:00:01.181) 0:36:01.725 ******* 2026-02-15 06:29:27.875553 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:27.875565 | orchestrator | 2026-02-15 06:29:27.875577 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-15 06:29:27.875590 | orchestrator | Sunday 15 February 2026 06:29:24 +0000 (0:00:01.185) 0:36:02.910 ******* 2026-02-15 06:29:27.875619 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:27.875656 | orchestrator | 2026-02-15 06:29:27.875668 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-15 06:29:27.875679 | orchestrator | Sunday 15 February 2026 06:29:26 +0000 (0:00:01.204) 0:36:04.115 ******* 2026-02-15 06:29:27.875690 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:27.875700 | orchestrator | 2026-02-15 06:29:27.875711 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-15 06:29:27.875731 | orchestrator | Sunday 15 February 2026 06:29:27 +0000 (0:00:01.184) 0:36:05.300 ******* 2026-02-15 06:29:27.875744 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 
'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:29:27.875758 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'uuids': ['d94e5f79-6313-45be-bfeb-6c020052505d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d']}})  2026-02-15 06:29:27.875771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30e735a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-15 06:29:27.875790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769']}})  2026-02-15 06:29:27.875804 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:29:27.875815 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:29:27.875835 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-32-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 06:29:29.609460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:29:29.609571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV', 'dm-uuid-CRYPT-LUKS2-00e62f5af87144e797787951ba7c7c75-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 06:29:29.609589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:29:29.609620 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'uuids': ['00e62f5a-f871-44e7-9778-7951ba7c7c75'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV']}})  2026-02-15 06:29:29.609686 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55']}})  2026-02-15 06:29:29.609699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:29:29.609735 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cdab0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 06:29:29.609771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:29:29.609789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:29:29.609802 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d', 'dm-uuid-CRYPT-LUKS2-d94e5f79631345bebfeb6c020052505d-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 06:29:29.609814 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:29.609827 | orchestrator | 2026-02-15 06:29:29.609840 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 06:29:29.609851 | orchestrator | Sunday 15 February 2026 06:29:29 +0000 (0:00:02.146) 0:36:07.447 ******* 2026-02-15 06:29:29.609863 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:29.609892 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'uuids': ['d94e5f79-6313-45be-bfeb-6c020052505d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:30.848771 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30e735a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:30.848891 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:30.848913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:30.848926 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:30.848958 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:30.848988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:30.849000 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV', 'dm-uuid-CRYPT-LUKS2-00e62f5af87144e797787951ba7c7c75-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:30.849018 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:30.849029 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'uuids': ['00e62f5a-f871-44e7-9778-7951ba7c7c75'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:30.849043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:30.849070 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:50.817053 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cdab0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:50.817192 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:50.817248 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:50.817280 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d', 'dm-uuid-CRYPT-LUKS2-d94e5f79631345bebfeb6c020052505d-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:29:50.817294 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:50.817307 | orchestrator | 2026-02-15 06:29:50.817319 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-15 06:29:50.817332 | orchestrator | Sunday 15 February 2026 06:29:30 +0000 (0:00:01.499) 0:36:08.946 ******* 2026-02-15 06:29:50.817343 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:50.817355 | orchestrator | 2026-02-15 06:29:50.817366 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-15 06:29:50.817377 | orchestrator | Sunday 15 February 2026 06:29:32 +0000 (0:00:01.552) 0:36:10.499 ******* 2026-02-15 06:29:50.817387 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:50.817398 | orchestrator | 2026-02-15 06:29:50.817410 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:29:50.817420 | orchestrator | Sunday 15 February 2026 06:29:33 +0000 (0:00:01.148) 0:36:11.648 ******* 2026-02-15 06:29:50.817431 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:50.817442 | orchestrator | 2026-02-15 06:29:50.817453 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:29:50.817464 | orchestrator | Sunday 15 February 2026 06:29:34 +0000 (0:00:01.442) 0:36:13.091 ******* 2026-02-15 06:29:50.817474 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:50.817486 | orchestrator | 2026-02-15 06:29:50.817505 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:29:50.817523 | orchestrator | Sunday 15 February 2026 06:29:36 +0000 (0:00:01.127) 0:36:14.219 ******* 2026-02-15 06:29:50.817541 | orchestrator | skipping: [testbed-node-3] 2026-02-15 
06:29:50.817560 | orchestrator | 2026-02-15 06:29:50.817579 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:29:50.817610 | orchestrator | Sunday 15 February 2026 06:29:37 +0000 (0:00:01.206) 0:36:15.426 ******* 2026-02-15 06:29:50.817656 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:50.817669 | orchestrator | 2026-02-15 06:29:50.817682 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:29:50.817695 | orchestrator | Sunday 15 February 2026 06:29:38 +0000 (0:00:01.187) 0:36:16.613 ******* 2026-02-15 06:29:50.817718 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-15 06:29:50.817731 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-15 06:29:50.817743 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-15 06:29:50.817756 | orchestrator | 2026-02-15 06:29:50.817768 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:29:50.817780 | orchestrator | Sunday 15 February 2026 06:29:40 +0000 (0:00:02.063) 0:36:18.677 ******* 2026-02-15 06:29:50.817793 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-15 06:29:50.817805 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-15 06:29:50.817817 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-15 06:29:50.817830 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:50.817842 | orchestrator | 2026-02-15 06:29:50.817852 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-15 06:29:50.817865 | orchestrator | Sunday 15 February 2026 06:29:41 +0000 (0:00:01.144) 0:36:19.821 ******* 2026-02-15 06:29:50.817885 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-15 06:29:50.817905 | 
orchestrator | 2026-02-15 06:29:50.817925 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-15 06:29:50.817948 | orchestrator | Sunday 15 February 2026 06:29:42 +0000 (0:00:01.159) 0:36:20.981 ******* 2026-02-15 06:29:50.817970 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:50.817982 | orchestrator | 2026-02-15 06:29:50.817993 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-15 06:29:50.818004 | orchestrator | Sunday 15 February 2026 06:29:44 +0000 (0:00:01.248) 0:36:22.229 ******* 2026-02-15 06:29:50.818071 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:50.818084 | orchestrator | 2026-02-15 06:29:50.818095 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-15 06:29:50.818130 | orchestrator | Sunday 15 February 2026 06:29:45 +0000 (0:00:01.327) 0:36:23.557 ******* 2026-02-15 06:29:50.818142 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:50.818152 | orchestrator | 2026-02-15 06:29:50.818163 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-15 06:29:50.818174 | orchestrator | Sunday 15 February 2026 06:29:46 +0000 (0:00:01.170) 0:36:24.728 ******* 2026-02-15 06:29:50.818185 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:29:50.818195 | orchestrator | 2026-02-15 06:29:50.818206 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-15 06:29:50.818216 | orchestrator | Sunday 15 February 2026 06:29:47 +0000 (0:00:01.276) 0:36:26.004 ******* 2026-02-15 06:29:50.818227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 06:29:50.818246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 06:29:50.818265 | orchestrator | skipping: [testbed-node-3] 
=> (item=testbed-node-5)  2026-02-15 06:29:50.818284 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:50.818305 | orchestrator | 2026-02-15 06:29:50.818327 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 06:29:50.818346 | orchestrator | Sunday 15 February 2026 06:29:49 +0000 (0:00:01.437) 0:36:27.441 ******* 2026-02-15 06:29:50.818366 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 06:29:50.818386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 06:29:50.818404 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-15 06:29:50.818416 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:29:50.818426 | orchestrator | 2026-02-15 06:29:50.818448 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 06:30:40.117945 | orchestrator | Sunday 15 February 2026 06:29:50 +0000 (0:00:01.460) 0:36:28.901 ******* 2026-02-15 06:30:40.118145 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 06:30:40.118210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 06:30:40.118229 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-15 06:30:40.118245 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:30:40.118262 | orchestrator | 2026-02-15 06:30:40.118280 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 06:30:40.118299 | orchestrator | Sunday 15 February 2026 06:29:52 +0000 (0:00:01.511) 0:36:30.413 ******* 2026-02-15 06:30:40.118318 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:30:40.118336 | orchestrator | 2026-02-15 06:30:40.118356 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 06:30:40.118375 | orchestrator | Sunday 15 February 2026 06:29:53 +0000 
(0:00:01.216) 0:36:31.630 ******* 2026-02-15 06:30:40.118394 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-15 06:30:40.118408 | orchestrator | 2026-02-15 06:30:40.118419 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-15 06:30:40.118430 | orchestrator | Sunday 15 February 2026 06:29:54 +0000 (0:00:01.363) 0:36:32.993 ******* 2026-02-15 06:30:40.118441 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:30:40.118453 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:30:40.118463 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:30:40.118476 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-15 06:30:40.118489 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:30:40.118518 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:30:40.118531 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:30:40.118544 | orchestrator | 2026-02-15 06:30:40.118557 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-15 06:30:40.118569 | orchestrator | Sunday 15 February 2026 06:29:57 +0000 (0:00:02.155) 0:36:35.148 ******* 2026-02-15 06:30:40.118615 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:30:40.118629 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:30:40.118641 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:30:40.118654 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-02-15 06:30:40.118666 | 
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:30:40.118678 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:30:40.118690 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:30:40.118702 | orchestrator | 2026-02-15 06:30:40.118714 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-15 06:30:40.118727 | orchestrator | Sunday 15 February 2026 06:29:59 +0000 (0:00:02.710) 0:36:37.859 ******* 2026-02-15 06:30:40.118739 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:30:40.118752 | orchestrator | 2026-02-15 06:30:40.118765 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-15 06:30:40.118777 | orchestrator | Sunday 15 February 2026 06:30:01 +0000 (0:00:01.616) 0:36:39.476 ******* 2026-02-15 06:30:40.118790 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:30:40.118802 | orchestrator | 2026-02-15 06:30:40.118814 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-15 06:30:40.118827 | orchestrator | Sunday 15 February 2026 06:30:02 +0000 (0:00:01.171) 0:36:40.647 ******* 2026-02-15 06:30:40.118839 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:30:40.118849 | orchestrator | 2026-02-15 06:30:40.118860 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-15 06:30:40.118883 | orchestrator | Sunday 15 February 2026 06:30:04 +0000 (0:00:01.783) 0:36:42.431 ******* 2026-02-15 06:30:40.118894 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-15 06:30:40.118905 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-02-15 06:30:40.118916 | orchestrator | 2026-02-15 06:30:40.118927 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] 
************************ 2026-02-15 06:30:40.118937 | orchestrator | Sunday 15 February 2026 06:30:08 +0000 (0:00:04.084) 0:36:46.516 ******* 2026-02-15 06:30:40.118948 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3 2026-02-15 06:30:40.118960 | orchestrator | 2026-02-15 06:30:40.118970 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-15 06:30:40.118981 | orchestrator | Sunday 15 February 2026 06:30:09 +0000 (0:00:01.118) 0:36:47.635 ******* 2026-02-15 06:30:40.118991 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3 2026-02-15 06:30:40.119002 | orchestrator | 2026-02-15 06:30:40.119013 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-15 06:30:40.119023 | orchestrator | Sunday 15 February 2026 06:30:10 +0000 (0:00:01.170) 0:36:48.805 ******* 2026-02-15 06:30:40.119034 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:30:40.119045 | orchestrator | 2026-02-15 06:30:40.119056 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-15 06:30:40.119066 | orchestrator | Sunday 15 February 2026 06:30:11 +0000 (0:00:01.130) 0:36:49.936 ******* 2026-02-15 06:30:40.119077 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:30:40.119088 | orchestrator | 2026-02-15 06:30:40.119098 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-15 06:30:40.119130 | orchestrator | Sunday 15 February 2026 06:30:13 +0000 (0:00:01.918) 0:36:51.855 ******* 2026-02-15 06:30:40.119141 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:30:40.119152 | orchestrator | 2026-02-15 06:30:40.119163 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-15 06:30:40.119174 | orchestrator | Sunday 15 February 2026 
06:30:15 +0000 (0:00:01.525) 0:36:53.380 *******
2026-02-15 06:30:40.119184 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:30:40.119195 | orchestrator |
2026-02-15 06:30:40.119206 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 06:30:40.119217 | orchestrator | Sunday 15 February 2026 06:30:16 +0000 (0:00:01.520) 0:36:54.901 *******
2026-02-15 06:30:40.119227 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.119238 | orchestrator |
2026-02-15 06:30:40.119249 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 06:30:40.119260 | orchestrator | Sunday 15 February 2026 06:30:17 +0000 (0:00:01.152) 0:36:56.053 *******
2026-02-15 06:30:40.119270 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.119281 | orchestrator |
2026-02-15 06:30:40.119292 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 06:30:40.119303 | orchestrator | Sunday 15 February 2026 06:30:19 +0000 (0:00:01.208) 0:36:57.262 *******
2026-02-15 06:30:40.119314 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.119324 | orchestrator |
2026-02-15 06:30:40.119335 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 06:30:40.119346 | orchestrator | Sunday 15 February 2026 06:30:20 +0000 (0:00:01.200) 0:36:58.462 *******
2026-02-15 06:30:40.119357 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:30:40.119367 | orchestrator |
2026-02-15 06:30:40.119378 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 06:30:40.119389 | orchestrator | Sunday 15 February 2026 06:30:21 +0000 (0:00:01.539) 0:37:00.002 *******
2026-02-15 06:30:40.119400 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:30:40.119410 | orchestrator |
2026-02-15 06:30:40.119427 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 06:30:40.119438 | orchestrator | Sunday 15 February 2026 06:30:23 +0000 (0:00:01.621) 0:37:01.623 *******
2026-02-15 06:30:40.119456 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.119467 | orchestrator |
2026-02-15 06:30:40.119477 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 06:30:40.119488 | orchestrator | Sunday 15 February 2026 06:30:24 +0000 (0:00:01.179) 0:37:02.803 *******
2026-02-15 06:30:40.119499 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.119509 | orchestrator |
2026-02-15 06:30:40.119520 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 06:30:40.119531 | orchestrator | Sunday 15 February 2026 06:30:25 +0000 (0:00:01.185) 0:37:03.989 *******
2026-02-15 06:30:40.119541 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:30:40.119552 | orchestrator |
2026-02-15 06:30:40.119562 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 06:30:40.119573 | orchestrator | Sunday 15 February 2026 06:30:27 +0000 (0:00:01.170) 0:37:05.160 *******
2026-02-15 06:30:40.119623 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:30:40.119636 | orchestrator |
2026-02-15 06:30:40.119647 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 06:30:40.119657 | orchestrator | Sunday 15 February 2026 06:30:28 +0000 (0:00:01.133) 0:37:06.294 *******
2026-02-15 06:30:40.119668 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:30:40.119679 | orchestrator |
2026-02-15 06:30:40.119689 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 06:30:40.119700 | orchestrator | Sunday 15 February 2026 06:30:29 +0000 (0:00:01.204) 0:37:07.499 *******
2026-02-15 06:30:40.119711 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.119721 | orchestrator |
2026-02-15 06:30:40.119732 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 06:30:40.119743 | orchestrator | Sunday 15 February 2026 06:30:30 +0000 (0:00:01.199) 0:37:08.699 *******
2026-02-15 06:30:40.119753 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.119764 | orchestrator |
2026-02-15 06:30:40.119775 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 06:30:40.119785 | orchestrator | Sunday 15 February 2026 06:30:31 +0000 (0:00:01.146) 0:37:09.845 *******
2026-02-15 06:30:40.119796 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.119807 | orchestrator |
2026-02-15 06:30:40.119817 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 06:30:40.119828 | orchestrator | Sunday 15 February 2026 06:30:32 +0000 (0:00:01.124) 0:37:10.970 *******
2026-02-15 06:30:40.119838 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:30:40.119849 | orchestrator |
2026-02-15 06:30:40.119860 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 06:30:40.119870 | orchestrator | Sunday 15 February 2026 06:30:34 +0000 (0:00:01.234) 0:37:12.205 *******
2026-02-15 06:30:40.119881 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:30:40.119891 | orchestrator |
2026-02-15 06:30:40.119902 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-15 06:30:40.119913 | orchestrator | Sunday 15 February 2026 06:30:35 +0000 (0:00:01.181) 0:37:13.387 *******
2026-02-15 06:30:40.119923 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.119934 | orchestrator |
2026-02-15 06:30:40.119944 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-15 06:30:40.119955 | orchestrator | Sunday 15 February 2026 06:30:36 +0000 (0:00:01.303) 0:37:14.690 *******
2026-02-15 06:30:40.119966 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.119976 | orchestrator |
2026-02-15 06:30:40.119996 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-15 06:30:40.120015 | orchestrator | Sunday 15 February 2026 06:30:37 +0000 (0:00:01.208) 0:37:15.899 *******
2026-02-15 06:30:40.120043 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.120066 | orchestrator |
2026-02-15 06:30:40.120083 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-15 06:30:40.120101 | orchestrator | Sunday 15 February 2026 06:30:38 +0000 (0:00:01.189) 0:37:17.089 *******
2026-02-15 06:30:40.120134 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:30:40.120151 | orchestrator |
2026-02-15 06:30:40.120181 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-15 06:31:29.624627 | orchestrator | Sunday 15 February 2026 06:30:40 +0000 (0:00:01.120) 0:37:18.209 *******
2026-02-15 06:31:29.624727 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.624737 | orchestrator |
2026-02-15 06:31:29.624745 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-15 06:31:29.624752 | orchestrator | Sunday 15 February 2026 06:30:41 +0000 (0:00:01.239) 0:37:19.449 *******
2026-02-15 06:31:29.624759 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.624765 | orchestrator |
2026-02-15 06:31:29.624773 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-15 06:31:29.624780 | orchestrator | Sunday 15 February 2026 06:30:42 +0000 (0:00:01.124) 0:37:20.573 *******
2026-02-15 06:31:29.624787 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.624794 | orchestrator |
2026-02-15 06:31:29.624801 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-15 06:31:29.624809 | orchestrator | Sunday 15 February 2026 06:30:43 +0000 (0:00:01.270) 0:37:21.844 *******
2026-02-15 06:31:29.624816 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.624823 | orchestrator |
2026-02-15 06:31:29.624830 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-15 06:31:29.624836 | orchestrator | Sunday 15 February 2026 06:30:44 +0000 (0:00:01.136) 0:37:22.980 *******
2026-02-15 06:31:29.624843 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.624849 | orchestrator |
2026-02-15 06:31:29.624856 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-15 06:31:29.624863 | orchestrator | Sunday 15 February 2026 06:30:46 +0000 (0:00:01.177) 0:37:24.157 *******
2026-02-15 06:31:29.624869 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.624876 | orchestrator |
2026-02-15 06:31:29.624882 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-15 06:31:29.624902 | orchestrator | Sunday 15 February 2026 06:30:47 +0000 (0:00:01.202) 0:37:25.360 *******
2026-02-15 06:31:29.624909 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.624916 | orchestrator |
2026-02-15 06:31:29.624923 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-15 06:31:29.624929 | orchestrator | Sunday 15 February 2026 06:30:48 +0000 (0:00:01.148) 0:37:26.509 *******
2026-02-15 06:31:29.624936 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.624942 | orchestrator |
2026-02-15 06:31:29.624949 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-15 06:31:29.624955 | orchestrator | Sunday 15 February 2026 06:30:49 +0000 (0:00:01.138) 0:37:27.648 *******
2026-02-15 06:31:29.624962 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:31:29.624970 | orchestrator |
2026-02-15 06:31:29.624976 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-15 06:31:29.624983 | orchestrator | Sunday 15 February 2026 06:30:51 +0000 (0:00:01.935) 0:37:29.583 *******
2026-02-15 06:31:29.624990 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:31:29.624996 | orchestrator |
2026-02-15 06:31:29.625003 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-15 06:31:29.625010 | orchestrator | Sunday 15 February 2026 06:30:53 +0000 (0:00:02.356) 0:37:31.939 *******
2026-02-15 06:31:29.625016 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
2026-02-15 06:31:29.625024 | orchestrator |
2026-02-15 06:31:29.625030 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-15 06:31:29.625036 | orchestrator | Sunday 15 February 2026 06:30:55 +0000 (0:00:01.188) 0:37:33.128 *******
2026-02-15 06:31:29.625043 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625050 | orchestrator |
2026-02-15 06:31:29.625056 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-15 06:31:29.625063 | orchestrator | Sunday 15 February 2026 06:30:56 +0000 (0:00:01.129) 0:37:34.258 *******
2026-02-15 06:31:29.625091 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625098 | orchestrator |
2026-02-15 06:31:29.625104 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-15 06:31:29.625110 | orchestrator | Sunday 15 February 2026 06:30:57 +0000 (0:00:01.130) 0:37:35.388 *******
2026-02-15 06:31:29.625116 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-15 06:31:29.625122 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-15 06:31:29.625128 | orchestrator |
2026-02-15 06:31:29.625135 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-15 06:31:29.625141 | orchestrator | Sunday 15 February 2026 06:30:59 +0000 (0:00:01.818) 0:37:37.207 *******
2026-02-15 06:31:29.625148 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:31:29.625154 | orchestrator |
2026-02-15 06:31:29.625161 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-15 06:31:29.625168 | orchestrator | Sunday 15 February 2026 06:31:00 +0000 (0:00:01.499) 0:37:38.707 *******
2026-02-15 06:31:29.625175 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625182 | orchestrator |
2026-02-15 06:31:29.625188 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-15 06:31:29.625195 | orchestrator | Sunday 15 February 2026 06:31:01 +0000 (0:00:01.163) 0:37:39.870 *******
2026-02-15 06:31:29.625202 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625209 | orchestrator |
2026-02-15 06:31:29.625216 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-15 06:31:29.625223 | orchestrator | Sunday 15 February 2026 06:31:02 +0000 (0:00:01.143) 0:37:41.014 *******
2026-02-15 06:31:29.625230 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625237 | orchestrator |
2026-02-15 06:31:29.625243 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-15 06:31:29.625250 | orchestrator | Sunday 15 February 2026 06:31:04 +0000 (0:00:01.188) 0:37:42.202 *******
2026-02-15 06:31:29.625257 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
2026-02-15 06:31:29.625264 | orchestrator |
2026-02-15 06:31:29.625271 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-15 06:31:29.625291 | orchestrator | Sunday 15 February 2026 06:31:05 +0000 (0:00:01.180) 0:37:43.383 *******
2026-02-15 06:31:29.625298 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:31:29.625305 | orchestrator |
2026-02-15 06:31:29.625312 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-15 06:31:29.625319 | orchestrator | Sunday 15 February 2026 06:31:07 +0000 (0:00:01.736) 0:37:45.119 *******
2026-02-15 06:31:29.625326 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-15 06:31:29.625333 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-15 06:31:29.625339 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-15 06:31:29.625346 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625352 | orchestrator |
2026-02-15 06:31:29.625359 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-15 06:31:29.625366 | orchestrator | Sunday 15 February 2026 06:31:08 +0000 (0:00:01.177) 0:37:46.297 *******
2026-02-15 06:31:29.625373 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625380 | orchestrator |
2026-02-15 06:31:29.625387 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-15 06:31:29.625394 | orchestrator | Sunday 15 February 2026 06:31:09 +0000 (0:00:01.144) 0:37:47.441 *******
2026-02-15 06:31:29.625401 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625407 | orchestrator |
2026-02-15 06:31:29.625414 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-15 06:31:29.625420 | orchestrator | Sunday 15 February 2026 06:31:10 +0000 (0:00:01.198) 0:37:48.640 *******
2026-02-15 06:31:29.625434 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625441 | orchestrator |
2026-02-15 06:31:29.625448 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-15 06:31:29.625460 | orchestrator | Sunday 15 February 2026 06:31:11 +0000 (0:00:01.140) 0:37:49.781 *******
2026-02-15 06:31:29.625468 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625474 | orchestrator |
2026-02-15 06:31:29.625482 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-15 06:31:29.625488 | orchestrator | Sunday 15 February 2026 06:31:12 +0000 (0:00:01.170) 0:37:50.951 *******
2026-02-15 06:31:29.625495 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625502 | orchestrator |
2026-02-15 06:31:29.625509 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-15 06:31:29.625516 | orchestrator | Sunday 15 February 2026 06:31:14 +0000 (0:00:01.156) 0:37:52.108 *******
2026-02-15 06:31:29.625522 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:31:29.625529 | orchestrator |
2026-02-15 06:31:29.625536 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-15 06:31:29.625543 | orchestrator | Sunday 15 February 2026 06:31:16 +0000 (0:00:02.431) 0:37:54.539 *******
2026-02-15 06:31:29.625573 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:31:29.625579 | orchestrator |
2026-02-15 06:31:29.625586 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-15 06:31:29.625592 | orchestrator | Sunday 15 February 2026 06:31:17 +0000 (0:00:01.139) 0:37:55.679 *******
2026-02-15 06:31:29.625599 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
2026-02-15 06:31:29.625606 | orchestrator |
2026-02-15 06:31:29.625613 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-15 06:31:29.625619 | orchestrator | Sunday 15 February 2026 06:31:18 +0000 (0:00:01.132) 0:37:56.811 *******
2026-02-15 06:31:29.625625 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625632 | orchestrator |
2026-02-15 06:31:29.625638 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-15 06:31:29.625645 | orchestrator | Sunday 15 February 2026 06:31:19 +0000 (0:00:01.203) 0:37:58.015 *******
2026-02-15 06:31:29.625652 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625658 | orchestrator |
2026-02-15 06:31:29.625665 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-15 06:31:29.625671 | orchestrator | Sunday 15 February 2026 06:31:21 +0000 (0:00:01.227) 0:37:59.243 *******
2026-02-15 06:31:29.625677 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625684 | orchestrator |
2026-02-15 06:31:29.625690 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-15 06:31:29.625697 | orchestrator | Sunday 15 February 2026 06:31:22 +0000 (0:00:01.197) 0:38:00.440 *******
2026-02-15 06:31:29.625703 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625710 | orchestrator |
2026-02-15 06:31:29.625717 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-15 06:31:29.625724 | orchestrator | Sunday 15 February 2026 06:31:23 +0000 (0:00:01.432) 0:38:01.873 *******
2026-02-15 06:31:29.625730 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625737 | orchestrator |
2026-02-15 06:31:29.625743 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-15 06:31:29.625749 | orchestrator | Sunday 15 February 2026 06:31:24 +0000 (0:00:01.166) 0:38:03.040 *******
2026-02-15 06:31:29.625757 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625763 | orchestrator |
2026-02-15 06:31:29.625770 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-15 06:31:29.625776 | orchestrator | Sunday 15 February 2026 06:31:26 +0000 (0:00:01.183) 0:38:04.223 *******
2026-02-15 06:31:29.625783 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625789 | orchestrator |
2026-02-15 06:31:29.625795 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-15 06:31:29.625807 | orchestrator | Sunday 15 February 2026 06:31:27 +0000 (0:00:01.135) 0:38:05.359 *******
2026-02-15 06:31:29.625813 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:31:29.625820 | orchestrator |
2026-02-15 06:31:29.625825 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-15 06:31:29.625831 | orchestrator | Sunday 15 February 2026 06:31:28 +0000 (0:00:01.168) 0:38:06.528 *******
2026-02-15 06:31:29.625837 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:31:29.625844 | orchestrator |
2026-02-15 06:31:29.625850 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-15 06:31:29.625863 | orchestrator | Sunday 15 February 2026 06:31:29 +0000 (0:00:01.191) 0:38:07.719 *******
2026-02-15 06:32:20.059659 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
2026-02-15 06:32:20.059734 | orchestrator |
2026-02-15 06:32:20.059740 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-15 06:32:20.059745 | orchestrator | Sunday 15 February 2026 06:31:30 +0000 (0:00:01.111) 0:38:08.830 *******
2026-02-15 06:32:20.059750 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
2026-02-15 06:32:20.059755 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
2026-02-15 06:32:20.059759 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-02-15 06:32:20.059763 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-02-15 06:32:20.059766 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-02-15 06:32:20.059770 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-02-15 06:32:20.059774 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-02-15 06:32:20.059779 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-02-15 06:32:20.059783 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-15 06:32:20.059787 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-15 06:32:20.059791 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 06:32:20.059794 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 06:32:20.059798 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 06:32:20.059802 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 06:32:20.059811 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2026-02-15 06:32:20.059815 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
2026-02-15 06:32:20.059818 | orchestrator |
2026-02-15 06:32:20.059822 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-15 06:32:20.059826 | orchestrator | Sunday 15 February 2026 06:31:37 +0000 (0:00:06.373) 0:38:15.204 *******
2026-02-15 06:32:20.059830 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
2026-02-15 06:32:20.059834 | orchestrator |
2026-02-15 06:32:20.059837 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-15 06:32:20.059841 | orchestrator | Sunday 15 February 2026 06:31:38 +0000 (0:00:01.531) 0:38:16.736 *******
2026-02-15 06:32:20.059845 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-15 06:32:20.059850 | orchestrator |
2026-02-15 06:32:20.059854 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-15 06:32:20.059858 | orchestrator | Sunday 15 February 2026 06:31:40 +0000 (0:00:01.522) 0:38:18.258 *******
2026-02-15 06:32:20.059862 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-15 06:32:20.059866 | orchestrator |
2026-02-15 06:32:20.059869 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-15 06:32:20.059873 | orchestrator | Sunday 15 February 2026 06:31:42 +0000 (0:00:02.002) 0:38:20.261 *******
2026-02-15 06:32:20.059877 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.059892 | orchestrator |
2026-02-15 06:32:20.059896 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 06:32:20.059899 | orchestrator | Sunday 15 February 2026 06:31:43 +0000 (0:00:01.210) 0:38:21.471 *******
2026-02-15 06:32:20.059903 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.059907 | orchestrator |
2026-02-15 06:32:20.059910 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 06:32:20.059914 | orchestrator | Sunday 15 February 2026 06:31:44 +0000 (0:00:01.181) 0:38:22.653 *******
2026-02-15 06:32:20.059918 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.059921 | orchestrator |
2026-02-15 06:32:20.059925 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 06:32:20.059929 | orchestrator | Sunday 15 February 2026 06:31:45 +0000 (0:00:01.181) 0:38:23.835 *******
2026-02-15 06:32:20.059932 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.059936 | orchestrator |
2026-02-15 06:32:20.059940 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 06:32:20.059943 | orchestrator | Sunday 15 February 2026 06:31:46 +0000 (0:00:01.108) 0:38:24.944 *******
2026-02-15 06:32:20.059947 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.059951 | orchestrator |
2026-02-15 06:32:20.059954 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-15 06:32:20.059958 | orchestrator | Sunday 15 February 2026 06:31:48 +0000 (0:00:01.170) 0:38:26.115 *******
2026-02-15 06:32:20.059962 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.059966 | orchestrator |
2026-02-15 06:32:20.059969 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-15 06:32:20.059973 | orchestrator | Sunday 15 February 2026 06:31:49 +0000 (0:00:01.135) 0:38:27.250 *******
2026-02-15 06:32:20.059977 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.059980 | orchestrator |
2026-02-15 06:32:20.059984 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-15 06:32:20.059988 | orchestrator | Sunday 15 February 2026 06:31:50 +0000 (0:00:01.218) 0:38:28.469 *******
2026-02-15 06:32:20.059992 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.059995 | orchestrator |
2026-02-15 06:32:20.059999 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-15 06:32:20.060003 | orchestrator | Sunday 15 February 2026 06:31:51 +0000 (0:00:01.116) 0:38:29.586 *******
2026-02-15 06:32:20.060007 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.060010 | orchestrator |
2026-02-15 06:32:20.060022 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-15 06:32:20.060026 | orchestrator | Sunday 15 February 2026 06:31:52 +0000 (0:00:01.137) 0:38:30.724 *******
2026-02-15 06:32:20.060030 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.060034 | orchestrator |
2026-02-15 06:32:20.060037 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-15 06:32:20.060041 | orchestrator | Sunday 15 February 2026 06:31:53 +0000 (0:00:01.196) 0:38:31.920 *******
2026-02-15 06:32:20.060045 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:32:20.060049 | orchestrator |
2026-02-15 06:32:20.060052 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-15 06:32:20.060056 | orchestrator | Sunday 15 February 2026 06:31:55 +0000 (0:00:01.199) 0:38:33.120 *******
2026-02-15 06:32:20.060060 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-15 06:32:20.060064 | orchestrator |
2026-02-15 06:32:20.060067 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-15 06:32:20.060071 | orchestrator | Sunday 15 February 2026 06:31:59 +0000 (0:00:04.429) 0:38:37.549 *******
2026-02-15 06:32:20.060075 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-15 06:32:20.060078 | orchestrator |
2026-02-15 06:32:20.060085 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-15 06:32:20.060089 | orchestrator | Sunday 15 February 2026 06:32:00 +0000 (0:00:01.342) 0:38:38.891 *******
2026-02-15 06:32:20.060094 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-15 06:32:20.060101 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-15 06:32:20.060106 | orchestrator |
2026-02-15 06:32:20.060110 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-15 06:32:20.060114 | orchestrator | Sunday 15 February 2026 06:32:08 +0000 (0:00:07.556) 0:38:46.448 *******
2026-02-15 06:32:20.060118 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.060121 | orchestrator |
2026-02-15 06:32:20.060125 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-15 06:32:20.060129 | orchestrator | Sunday 15 February 2026 06:32:09 +0000 (0:00:01.222) 0:38:47.671 *******
2026-02-15 06:32:20.060132 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.060136 | orchestrator |
2026-02-15 06:32:20.060140 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:32:20.060143 | orchestrator | Sunday 15 February 2026 06:32:10 +0000 (0:00:01.226) 0:38:48.897 *******
2026-02-15 06:32:20.060147 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.060151 | orchestrator |
2026-02-15 06:32:20.060154 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:32:20.060158 | orchestrator | Sunday 15 February 2026 06:32:11 +0000 (0:00:01.194) 0:38:50.092 *******
2026-02-15 06:32:20.060162 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.060166 | orchestrator |
2026-02-15 06:32:20.060169 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:32:20.060173 | orchestrator | Sunday 15 February 2026 06:32:13 +0000 (0:00:01.162) 0:38:51.254 *******
2026-02-15 06:32:20.060177 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.060180 | orchestrator |
2026-02-15 06:32:20.060184 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:32:20.060188 | orchestrator | Sunday 15 February 2026 06:32:14 +0000 (0:00:01.225) 0:38:52.480 *******
2026-02-15 06:32:20.060191 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:32:20.060195 | orchestrator |
2026-02-15 06:32:20.060199 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:32:20.060202 | orchestrator | Sunday 15 February 2026 06:32:15 +0000 (0:00:01.258) 0:38:53.739 *******
2026-02-15 06:32:20.060206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 06:32:20.060210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 06:32:20.060214 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 06:32:20.060217 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.060221 | orchestrator |
2026-02-15 06:32:20.060225 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:32:20.060228 | orchestrator | Sunday 15 February 2026 06:32:17 +0000 (0:00:01.447) 0:38:55.186 *******
2026-02-15 06:32:20.060272 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 06:32:20.060279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 06:32:20.060283 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 06:32:20.060288 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:32:20.060295 | orchestrator |
2026-02-15 06:32:20.060299 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:32:20.060304 | orchestrator | Sunday 15 February 2026 06:32:18 +0000 (0:00:01.498) 0:38:56.685 *******
2026-02-15 06:32:20.060308 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-02-15 06:32:20.060313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-02-15 06:32:20.060320 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-02-15 06:33:20.154553 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:33:20.154672 | orchestrator |
2026-02-15 06:33:20.154690 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:33:20.154704 | orchestrator | Sunday 15 February 2026 06:32:20 +0000 (0:00:01.462) 0:38:58.148 *******
2026-02-15 06:33:20.154715 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:33:20.154727 | orchestrator |
2026-02-15 06:33:20.154738 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:33:20.154749 | orchestrator | Sunday 15 February 2026 06:32:21 +0000 (0:00:01.191) 0:38:59.340 *******
2026-02-15 06:33:20.154760 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-15 06:33:20.154771 | orchestrator |
2026-02-15 06:33:20.154782 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-15 06:33:20.154793 | orchestrator | Sunday 15 February 2026 06:32:23 +0000 (0:00:01.895) 0:39:01.236 *******
2026-02-15 06:33:20.154803 | orchestrator | changed: [testbed-node-3]
2026-02-15 06:33:20.154814 | orchestrator |
2026-02-15 06:33:20.154825 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-02-15 06:33:20.154836 | orchestrator | Sunday 15 February 2026 06:32:24 +0000 (0:00:01.772) 0:39:03.009 *******
2026-02-15 06:33:20.154847 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:33:20.154858 | orchestrator |
2026-02-15 06:33:20.154869 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-02-15 06:33:20.154879 | orchestrator | Sunday 15 February 2026 06:32:26 +0000 (0:00:01.183) 0:39:04.193 *******
2026-02-15 06:33:20.154890 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:33:20.154916 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:33:20.154928 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:33:20.154939 | orchestrator |
2026-02-15 06:33:20.154949 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-02-15 06:33:20.154960 | orchestrator | Sunday 15 February 2026 06:32:27 +0000 (0:00:01.739) 0:39:05.932 *******
2026-02-15 06:33:20.154970 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3
2026-02-15 06:33:20.154981 | orchestrator |
2026-02-15 06:33:20.154992 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-02-15 06:33:20.155002 | orchestrator | Sunday 15 February 2026 06:32:29 +0000 (0:00:01.476) 0:39:07.409 *******
2026-02-15 06:33:20.155013 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:33:20.155023 | orchestrator |
2026-02-15 06:33:20.155034 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-02-15 06:33:20.155047 | orchestrator | Sunday 15 February 2026 06:32:30 +0000 (0:00:01.160) 0:39:08.569 *******
2026-02-15 06:33:20.155058 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:33:20.155071 | orchestrator |
2026-02-15 06:33:20.155083 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-02-15 06:33:20.155096 | orchestrator | Sunday 15 February 2026 06:32:31 +0000 (0:00:01.140) 0:39:09.710 *******
2026-02-15 06:33:20.155108 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:33:20.155121 | orchestrator |
2026-02-15 06:33:20.155133 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-02-15 06:33:20.155145 | orchestrator | Sunday 15 February 2026 06:32:33 +0000 (0:00:01.462) 0:39:11.172 *******
2026-02-15 06:33:20.155158 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:33:20.155193 | orchestrator |
2026-02-15 06:33:20.155207 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-02-15 06:33:20.155219 | orchestrator | Sunday 15 February 2026 06:32:34 +0000 (0:00:01.175) 0:39:12.348 *******
2026-02-15 06:33:20.155232 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-02-15 06:33:20.155245 | orchestrator | ok: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-02-15 06:33:20.155257 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-02-15 06:33:20.155269 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-02-15 06:33:20.155281 | orchestrator | ok: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-02-15 06:33:20.155294 | orchestrator |
2026-02-15 06:33:20.155306 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-02-15 06:33:20.155318 | orchestrator | Sunday 15 February 2026 06:32:37 +0000 (0:00:02.998) 0:39:15.346 *******
2026-02-15 06:33:20.155330 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:33:20.155342 | orchestrator | 2026-02-15 06:33:20.155354 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-15 06:33:20.155365 | orchestrator | Sunday 15 February 2026 06:32:38 +0000 (0:00:01.135) 0:39:16.481 ******* 2026-02-15 06:33:20.155378 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3 2026-02-15 06:33:20.155390 | orchestrator | 2026-02-15 06:33:20.155401 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-15 06:33:20.155412 | orchestrator | Sunday 15 February 2026 06:32:39 +0000 (0:00:01.598) 0:39:18.081 ******* 2026-02-15 06:33:20.155423 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-15 06:33:20.155433 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 2026-02-15 06:33:20.155444 | orchestrator | 2026-02-15 06:33:20.155454 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-15 06:33:20.155465 | orchestrator | Sunday 15 February 2026 06:32:41 +0000 (0:00:01.931) 0:39:20.012 ******* 2026-02-15 06:33:20.155476 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 06:33:20.155511 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-15 06:33:20.155524 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 06:33:20.155534 | orchestrator | 2026-02-15 06:33:20.155562 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-15 06:33:20.155573 | orchestrator | Sunday 15 February 2026 06:32:45 +0000 (0:00:03.136) 0:39:23.148 ******* 2026-02-15 06:33:20.155584 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-15 06:33:20.155594 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-15 06:33:20.155605 | orchestrator | ok: [testbed-node-3] 
2026-02-15 06:33:20.155615 | orchestrator | 2026-02-15 06:33:20.155626 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-15 06:33:20.155637 | orchestrator | Sunday 15 February 2026 06:32:47 +0000 (0:00:01.956) 0:39:25.105 ******* 2026-02-15 06:33:20.155647 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:33:20.155658 | orchestrator | 2026-02-15 06:33:20.155668 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-15 06:33:20.155679 | orchestrator | Sunday 15 February 2026 06:32:48 +0000 (0:00:01.229) 0:39:26.335 ******* 2026-02-15 06:33:20.155689 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:33:20.155700 | orchestrator | 2026-02-15 06:33:20.155710 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-15 06:33:20.155721 | orchestrator | Sunday 15 February 2026 06:32:49 +0000 (0:00:01.147) 0:39:27.482 ******* 2026-02-15 06:33:20.155731 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:33:20.155742 | orchestrator | 2026-02-15 06:33:20.155752 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-15 06:33:20.155763 | orchestrator | Sunday 15 February 2026 06:32:50 +0000 (0:00:01.166) 0:39:28.648 ******* 2026-02-15 06:33:20.155784 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3 2026-02-15 06:33:20.155795 | orchestrator | 2026-02-15 06:33:20.155805 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-15 06:33:20.155821 | orchestrator | Sunday 15 February 2026 06:32:52 +0000 (0:00:01.492) 0:39:30.141 ******* 2026-02-15 06:33:20.155832 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:33:20.155843 | orchestrator | 2026-02-15 06:33:20.155853 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 
2026-02-15 06:33:20.155864 | orchestrator | Sunday 15 February 2026 06:32:53 +0000 (0:00:01.533) 0:39:31.675 ******* 2026-02-15 06:33:20.155875 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:33:20.155885 | orchestrator | 2026-02-15 06:33:20.155896 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-15 06:33:20.155906 | orchestrator | Sunday 15 February 2026 06:32:56 +0000 (0:00:03.367) 0:39:35.042 ******* 2026-02-15 06:33:20.155917 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3 2026-02-15 06:33:20.155927 | orchestrator | 2026-02-15 06:33:20.155938 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-15 06:33:20.155948 | orchestrator | Sunday 15 February 2026 06:32:58 +0000 (0:00:01.502) 0:39:36.545 ******* 2026-02-15 06:33:20.155959 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:33:20.155969 | orchestrator | 2026-02-15 06:33:20.155980 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-15 06:33:20.155991 | orchestrator | Sunday 15 February 2026 06:33:00 +0000 (0:00:02.023) 0:39:38.568 ******* 2026-02-15 06:33:20.156001 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:33:20.156011 | orchestrator | 2026-02-15 06:33:20.156022 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-15 06:33:20.156033 | orchestrator | Sunday 15 February 2026 06:33:02 +0000 (0:00:01.991) 0:39:40.560 ******* 2026-02-15 06:33:20.156043 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:33:20.156054 | orchestrator | 2026-02-15 06:33:20.156064 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-15 06:33:20.156075 | orchestrator | Sunday 15 February 2026 06:33:04 +0000 (0:00:02.262) 0:39:42.822 ******* 2026-02-15 06:33:20.156085 | orchestrator | skipping: [testbed-node-3] 
2026-02-15 06:33:20.156096 | orchestrator | 2026-02-15 06:33:20.156107 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-15 06:33:20.156117 | orchestrator | Sunday 15 February 2026 06:33:05 +0000 (0:00:01.160) 0:39:43.983 ******* 2026-02-15 06:33:20.156128 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:33:20.156138 | orchestrator | 2026-02-15 06:33:20.156149 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-15 06:33:20.156159 | orchestrator | Sunday 15 February 2026 06:33:07 +0000 (0:00:01.121) 0:39:45.106 ******* 2026-02-15 06:33:20.156170 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-02-15 06:33:20.156180 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-15 06:33:20.156191 | orchestrator | 2026-02-15 06:33:20.156201 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-15 06:33:20.156212 | orchestrator | Sunday 15 February 2026 06:33:08 +0000 (0:00:01.861) 0:39:46.967 ******* 2026-02-15 06:33:20.156222 | orchestrator | ok: [testbed-node-3] => (item=3) 2026-02-15 06:33:20.156233 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-02-15 06:33:20.156244 | orchestrator | 2026-02-15 06:33:20.156254 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-15 06:33:20.156265 | orchestrator | Sunday 15 February 2026 06:33:11 +0000 (0:00:02.896) 0:39:49.863 ******* 2026-02-15 06:33:20.156275 | orchestrator | changed: [testbed-node-3] => (item=3) 2026-02-15 06:33:20.156286 | orchestrator | changed: [testbed-node-3] => (item=0) 2026-02-15 06:33:20.156296 | orchestrator | 2026-02-15 06:33:20.156307 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-15 06:33:20.156329 | orchestrator | Sunday 15 February 2026 06:33:16 +0000 (0:00:04.634) 0:39:54.498 ******* 2026-02-15 06:33:20.156347 | orchestrator 
| skipping: [testbed-node-3] 2026-02-15 06:33:20.156366 | orchestrator | 2026-02-15 06:33:20.156385 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-15 06:33:20.156404 | orchestrator | Sunday 15 February 2026 06:33:17 +0000 (0:00:01.248) 0:39:55.746 ******* 2026-02-15 06:33:20.156422 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:33:20.156438 | orchestrator | 2026-02-15 06:33:20.156449 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-15 06:33:20.156460 | orchestrator | Sunday 15 February 2026 06:33:18 +0000 (0:00:01.234) 0:39:56.980 ******* 2026-02-15 06:33:20.156471 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:33:20.156514 | orchestrator | 2026-02-15 06:33:20.156537 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-15 06:34:04.317644 | orchestrator | Sunday 15 February 2026 06:33:20 +0000 (0:00:01.264) 0:39:58.245 ******* 2026-02-15 06:34:04.317818 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:34:04.317849 | orchestrator | 2026-02-15 06:34:04.317870 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-15 06:34:04.317891 | orchestrator | Sunday 15 February 2026 06:33:21 +0000 (0:00:01.163) 0:39:59.409 ******* 2026-02-15 06:34:04.317924 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:34:04.317946 | orchestrator | 2026-02-15 06:34:04.317966 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-15 06:34:04.317987 | orchestrator | Sunday 15 February 2026 06:33:22 +0000 (0:00:01.154) 0:40:00.564 ******* 2026-02-15 06:34:04.318007 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-15 06:34:04.318087 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-15 06:34:04.318105 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-15 06:34:04.318128 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:34:04.318147 | orchestrator | 2026-02-15 06:34:04.318167 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-15 06:34:04.318187 | orchestrator | Sunday 15 February 2026 06:33:33 +0000 (0:00:11.146) 0:40:11.710 ******* 2026-02-15 06:34:04.318203 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:34:04.318220 | orchestrator | 2026-02-15 06:34:04.318257 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-15 06:34:04.318277 | orchestrator | Sunday 15 February 2026 06:33:34 +0000 (0:00:01.186) 0:40:12.896 ******* 2026-02-15 06:34:04.318294 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:34:04.318312 | orchestrator | 2026-02-15 06:34:04.318330 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-15 06:34:04.318350 | orchestrator | Sunday 15 February 2026 06:33:35 +0000 (0:00:01.191) 0:40:14.088 ******* 2026-02-15 06:34:04.318368 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:34:04.318386 | orchestrator | 2026-02-15 06:34:04.318403 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-15 06:34:04.318421 | orchestrator | Sunday 15 February 2026 06:33:37 +0000 (0:00:01.125) 0:40:15.214 ******* 2026-02-15 06:34:04.318440 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:34:04.318491 | orchestrator | 2026-02-15 06:34:04.318511 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-15 
06:34:04.318528 | orchestrator | Sunday 15 February 2026 06:33:38 +0000 (0:00:01.129) 0:40:16.343 ******* 2026-02-15 06:34:04.318545 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:34:04.318562 | orchestrator | 2026-02-15 06:34:04.318579 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-15 06:34:04.318596 | orchestrator | Sunday 15 February 2026 06:33:39 +0000 (0:00:01.130) 0:40:17.473 ******* 2026-02-15 06:34:04.318614 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:34:04.318663 | orchestrator | 2026-02-15 06:34:04.318680 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-15 06:34:04.318697 | orchestrator | Sunday 15 February 2026 06:33:40 +0000 (0:00:01.130) 0:40:18.604 ******* 2026-02-15 06:34:04.318715 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:34:04.318730 | orchestrator | 2026-02-15 06:34:04.318747 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-15 06:34:04.318764 | orchestrator | 2026-02-15 06:34:04.318780 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 06:34:04.318797 | orchestrator | Sunday 15 February 2026 06:33:41 +0000 (0:00:01.036) 0:40:19.640 ******* 2026-02-15 06:34:04.318813 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-15 06:34:04.318830 | orchestrator | 2026-02-15 06:34:04.318846 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 06:34:04.318862 | orchestrator | Sunday 15 February 2026 06:33:42 +0000 (0:00:01.135) 0:40:20.776 ******* 2026-02-15 06:34:04.318879 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:04.318897 | orchestrator | 2026-02-15 06:34:04.318911 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 
06:34:04.318927 | orchestrator | Sunday 15 February 2026 06:33:44 +0000 (0:00:01.509) 0:40:22.285 ******* 2026-02-15 06:34:04.318943 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:04.318959 | orchestrator | 2026-02-15 06:34:04.318975 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 06:34:04.318991 | orchestrator | Sunday 15 February 2026 06:33:45 +0000 (0:00:01.220) 0:40:23.506 ******* 2026-02-15 06:34:04.319007 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:04.319024 | orchestrator | 2026-02-15 06:34:04.319040 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 06:34:04.319056 | orchestrator | Sunday 15 February 2026 06:33:46 +0000 (0:00:01.450) 0:40:24.957 ******* 2026-02-15 06:34:04.319073 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:04.319090 | orchestrator | 2026-02-15 06:34:04.319105 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 06:34:04.319121 | orchestrator | Sunday 15 February 2026 06:33:48 +0000 (0:00:01.152) 0:40:26.110 ******* 2026-02-15 06:34:04.319137 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:04.319153 | orchestrator | 2026-02-15 06:34:04.319170 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 06:34:04.319186 | orchestrator | Sunday 15 February 2026 06:33:49 +0000 (0:00:01.119) 0:40:27.230 ******* 2026-02-15 06:34:04.319203 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:04.319219 | orchestrator | 2026-02-15 06:34:04.319236 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 06:34:04.319253 | orchestrator | Sunday 15 February 2026 06:33:50 +0000 (0:00:01.194) 0:40:28.425 ******* 2026-02-15 06:34:04.319269 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:04.319287 | orchestrator | 2026-02-15 
06:34:04.319303 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 06:34:04.319345 | orchestrator | Sunday 15 February 2026 06:33:51 +0000 (0:00:01.133) 0:40:29.558 ******* 2026-02-15 06:34:04.319363 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:04.319378 | orchestrator | 2026-02-15 06:34:04.319392 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 06:34:04.319406 | orchestrator | Sunday 15 February 2026 06:33:52 +0000 (0:00:01.131) 0:40:30.689 ******* 2026-02-15 06:34:04.319420 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:34:04.319434 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:34:04.319447 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:34:04.319521 | orchestrator | 2026-02-15 06:34:04.319540 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 06:34:04.319555 | orchestrator | Sunday 15 February 2026 06:33:54 +0000 (0:00:02.193) 0:40:32.883 ******* 2026-02-15 06:34:04.319589 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:04.319608 | orchestrator | 2026-02-15 06:34:04.319626 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 06:34:04.319644 | orchestrator | Sunday 15 February 2026 06:33:56 +0000 (0:00:01.279) 0:40:34.163 ******* 2026-02-15 06:34:04.319659 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:34:04.319675 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:34:04.319702 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:34:04.319720 | orchestrator 
| 2026-02-15 06:34:04.319738 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 06:34:04.319757 | orchestrator | Sunday 15 February 2026 06:33:59 +0000 (0:00:03.247) 0:40:37.410 ******* 2026-02-15 06:34:04.319776 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-15 06:34:04.319795 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-15 06:34:04.319814 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-15 06:34:04.319834 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:04.319853 | orchestrator | 2026-02-15 06:34:04.319870 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 06:34:04.319887 | orchestrator | Sunday 15 February 2026 06:34:01 +0000 (0:00:01.731) 0:40:39.142 ******* 2026-02-15 06:34:04.319907 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 06:34:04.319927 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 06:34:04.319946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 06:34:04.319964 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:04.319983 | orchestrator | 2026-02-15 06:34:04.320001 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 06:34:04.320018 | 
orchestrator | Sunday 15 February 2026 06:34:03 +0000 (0:00:02.039) 0:40:41.182 ******* 2026-02-15 06:34:04.320038 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:04.320057 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:04.320077 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:04.320113 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:04.320133 | orchestrator | 2026-02-15 06:34:04.320150 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 06:34:04.320182 | orchestrator | Sunday 15 February 2026 06:34:04 +0000 (0:00:01.225) 0:40:42.408 ******* 2026-02-15 06:34:23.636825 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:33:56.597865', 'end': '2026-02-15 06:33:56.649061', 'delta': '0:00:00.051196', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 06:34:23.636996 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:33:57.499986', 'end': '2026-02-15 06:33:57.549463', 'delta': '0:00:00.049477', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 06:34:23.637015 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:33:58.052608', 'end': '2026-02-15 06:33:58.089901', 'delta': '0:00:00.037293', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 
'removes': None, 'stdin': None}}, 'stdout_lines': ['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 06:34:23.637029 | orchestrator | 2026-02-15 06:34:23.637043 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 06:34:23.637055 | orchestrator | Sunday 15 February 2026 06:34:05 +0000 (0:00:01.280) 0:40:43.688 ******* 2026-02-15 06:34:23.637066 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:23.637078 | orchestrator | 2026-02-15 06:34:23.637090 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 06:34:23.637100 | orchestrator | Sunday 15 February 2026 06:34:06 +0000 (0:00:01.295) 0:40:44.984 ******* 2026-02-15 06:34:23.637112 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:23.637124 | orchestrator | 2026-02-15 06:34:23.637135 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-15 06:34:23.637146 | orchestrator | Sunday 15 February 2026 06:34:08 +0000 (0:00:01.305) 0:40:46.290 ******* 2026-02-15 06:34:23.637156 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:23.637167 | orchestrator | 2026-02-15 06:34:23.637178 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-15 06:34:23.637189 | orchestrator | Sunday 15 February 2026 06:34:09 +0000 (0:00:01.219) 0:40:47.509 ******* 2026-02-15 06:34:23.637200 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:34:23.637211 | orchestrator | 2026-02-15 06:34:23.637222 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:34:23.637263 | orchestrator | Sunday 15 February 2026 06:34:11 +0000 (0:00:02.049) 0:40:49.559 ******* 2026-02-15 06:34:23.637275 | orchestrator | ok: [testbed-node-4] 2026-02-15 
06:34:23.637285 | orchestrator | 2026-02-15 06:34:23.637296 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-15 06:34:23.637309 | orchestrator | Sunday 15 February 2026 06:34:12 +0000 (0:00:01.145) 0:40:50.704 ******* 2026-02-15 06:34:23.637322 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:23.637334 | orchestrator | 2026-02-15 06:34:23.637346 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-15 06:34:23.637359 | orchestrator | Sunday 15 February 2026 06:34:13 +0000 (0:00:01.117) 0:40:51.822 ******* 2026-02-15 06:34:23.637371 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:23.637384 | orchestrator | 2026-02-15 06:34:23.637396 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:34:23.637408 | orchestrator | Sunday 15 February 2026 06:34:15 +0000 (0:00:01.288) 0:40:53.110 ******* 2026-02-15 06:34:23.637419 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:23.637430 | orchestrator | 2026-02-15 06:34:23.637441 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-15 06:34:23.637481 | orchestrator | Sunday 15 February 2026 06:34:16 +0000 (0:00:01.154) 0:40:54.265 ******* 2026-02-15 06:34:23.637493 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:23.637504 | orchestrator | 2026-02-15 06:34:23.637535 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-15 06:34:23.637546 | orchestrator | Sunday 15 February 2026 06:34:17 +0000 (0:00:01.128) 0:40:55.394 ******* 2026-02-15 06:34:23.637557 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:23.637568 | orchestrator | 2026-02-15 06:34:23.637579 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-15 06:34:23.637590 | orchestrator | Sunday 15 February 
2026 06:34:18 +0000 (0:00:01.216) 0:40:56.611 ******* 2026-02-15 06:34:23.637601 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:23.637612 | orchestrator | 2026-02-15 06:34:23.637623 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-15 06:34:23.637634 | orchestrator | Sunday 15 February 2026 06:34:19 +0000 (0:00:01.163) 0:40:57.775 ******* 2026-02-15 06:34:23.637644 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:23.637655 | orchestrator | 2026-02-15 06:34:23.637666 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-15 06:34:23.637676 | orchestrator | Sunday 15 February 2026 06:34:20 +0000 (0:00:01.307) 0:40:59.082 ******* 2026-02-15 06:34:23.637687 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:23.637697 | orchestrator | 2026-02-15 06:34:23.637708 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-15 06:34:23.637720 | orchestrator | Sunday 15 February 2026 06:34:22 +0000 (0:00:01.139) 0:41:00.221 ******* 2026-02-15 06:34:23.637730 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:23.637741 | orchestrator | 2026-02-15 06:34:23.637751 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-15 06:34:23.637768 | orchestrator | Sunday 15 February 2026 06:34:23 +0000 (0:00:01.229) 0:41:01.451 ******* 2026-02-15 06:34:23.637782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:34:23.637797 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee', 'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'uuids': ['65aea23d-0c6f-484a-a24c-521c476a1576'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k']}})  2026-02-15 06:34:23.637819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7cc59cd1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-15 06:34:23.637833 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800']}})  2026-02-15 06:34:23.637853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:34:25.024931 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:34:25.025054 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-31-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 06:34:25.025090 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:34:25.025101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24', 'dm-uuid-CRYPT-LUKS2-d6fb5e45582d485d831faba7ab4bd3c7-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 06:34:25.025133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:34:25.025143 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'uuids': ['d6fb5e45-582d-485d-831f-aba7ab4bd3c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24']}})  2026-02-15 06:34:25.025155 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee']}})  2026-02-15 06:34:25.025183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:34:25.025202 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7713f0f4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 06:34:25.025221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:34:25.025231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:34:25.025240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k', 'dm-uuid-CRYPT-LUKS2-65aea23d0c6f484aa24c521c476a1576-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 06:34:25.025251 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:25.025263 | orchestrator | 2026-02-15 06:34:25.025273 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 06:34:25.025283 | orchestrator | Sunday 15 February 2026 06:34:24 +0000 (0:00:01.452) 0:41:02.903 ******* 2026-02-15 06:34:25.025300 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:26.248520 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee', 'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'uuids': ['65aea23d-0c6f-484a-a24c-521c476a1576'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': 
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:26.248652 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7cc59cd1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:26.248669 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:26.248684 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:26.248697 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:26.248732 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:26.248753 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:26.248764 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24', 'dm-uuid-CRYPT-LUKS2-d6fb5e45582d485d831faba7ab4bd3c7-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:26.248776 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:26.248787 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'uuids': ['d6fb5e45-582d-485d-831f-aba7ab4bd3c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:26.248806 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:44.841774 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:44.841896 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7713f0f4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:44.841915 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:44.841946 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:44.841986 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k', 'dm-uuid-CRYPT-LUKS2-65aea23d0c6f484aa24c521c476a1576-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:34:44.842000 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:44.842072 | orchestrator | 2026-02-15 06:34:44.842087 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-15 06:34:44.842099 | orchestrator | Sunday 15 February 2026 06:34:26 +0000 (0:00:01.444) 0:41:04.348 ******* 2026-02-15 06:34:44.842110 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:44.842121 | orchestrator | 2026-02-15 06:34:44.842132 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-15 06:34:44.842153 | orchestrator | Sunday 15 February 2026 06:34:27 +0000 (0:00:01.533) 0:41:05.881 ******* 2026-02-15 06:34:44.842164 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:44.842174 | orchestrator | 2026-02-15 06:34:44.842185 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:34:44.842196 | orchestrator | Sunday 15 February 2026 06:34:28 +0000 (0:00:01.142) 0:41:07.024 ******* 2026-02-15 06:34:44.842206 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:34:44.842217 | orchestrator | 2026-02-15 06:34:44.842228 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:34:44.842238 | orchestrator | Sunday 15 February 2026 06:34:30 +0000 (0:00:01.490) 0:41:08.515 ******* 2026-02-15 06:34:44.842249 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:44.842260 | orchestrator | 2026-02-15 06:34:44.842270 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:34:44.842281 | orchestrator | Sunday 15 February 2026 06:34:31 +0000 (0:00:01.176) 0:41:09.692 ******* 2026-02-15 06:34:44.842293 | orchestrator | skipping: [testbed-node-4] 2026-02-15 
06:34:44.842306 | orchestrator | 2026-02-15 06:34:44.842318 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:34:44.842330 | orchestrator | Sunday 15 February 2026 06:34:32 +0000 (0:00:01.249) 0:41:10.941 ******* 2026-02-15 06:34:44.842343 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:44.842355 | orchestrator | 2026-02-15 06:34:44.842367 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:34:44.842379 | orchestrator | Sunday 15 February 2026 06:34:34 +0000 (0:00:01.217) 0:41:12.158 ******* 2026-02-15 06:34:44.842391 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-02-15 06:34:44.842404 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-02-15 06:34:44.842416 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-02-15 06:34:44.842428 | orchestrator | 2026-02-15 06:34:44.842515 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:34:44.842531 | orchestrator | Sunday 15 February 2026 06:34:36 +0000 (0:00:02.074) 0:41:14.233 ******* 2026-02-15 06:34:44.842544 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-15 06:34:44.842556 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-15 06:34:44.842569 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-15 06:34:44.842582 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:34:44.842594 | orchestrator | 2026-02-15 06:34:44.842607 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-15 06:34:44.842630 | orchestrator | Sunday 15 February 2026 06:34:37 +0000 (0:00:01.225) 0:41:15.459 ******* 2026-02-15 06:34:44.842642 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4 2026-02-15 06:34:44.842655 | 
orchestrator |
2026-02-15 06:34:44.842667 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:34:44.842679 | orchestrator | Sunday 15 February 2026 06:34:38 +0000 (0:00:01.334) 0:41:16.794 *******
2026-02-15 06:34:44.842690 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:34:44.842701 | orchestrator |
2026-02-15 06:34:44.842712 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:34:44.842722 | orchestrator | Sunday 15 February 2026 06:34:39 +0000 (0:00:01.131) 0:41:17.925 *******
2026-02-15 06:34:44.842733 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:34:44.842744 | orchestrator |
2026-02-15 06:34:44.842754 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:34:44.842765 | orchestrator | Sunday 15 February 2026 06:34:40 +0000 (0:00:01.154) 0:41:19.079 *******
2026-02-15 06:34:44.842776 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:34:44.842786 | orchestrator |
2026-02-15 06:34:44.842797 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:34:44.842808 | orchestrator | Sunday 15 February 2026 06:34:42 +0000 (0:00:01.213) 0:41:20.293 *******
2026-02-15 06:34:44.842818 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:34:44.842829 | orchestrator |
2026-02-15 06:34:44.842839 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:34:44.842850 | orchestrator | Sunday 15 February 2026 06:34:43 +0000 (0:00:01.224) 0:41:21.518 *******
2026-02-15 06:34:44.842871 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-15 06:35:25.677065 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-15 06:35:25.677204 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-15 06:35:25.677222 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.677234 | orchestrator |
2026-02-15 06:35:25.677246 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:35:25.677258 | orchestrator | Sunday 15 February 2026 06:34:44 +0000 (0:00:01.416) 0:41:22.934 *******
2026-02-15 06:35:25.677269 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-15 06:35:25.677281 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-15 06:35:25.677291 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-15 06:35:25.677302 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.677313 | orchestrator |
2026-02-15 06:35:25.677324 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:35:25.677335 | orchestrator | Sunday 15 February 2026 06:34:46 +0000 (0:00:01.425) 0:41:24.360 *******
2026-02-15 06:35:25.677346 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-15 06:35:25.677357 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-15 06:35:25.677367 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-15 06:35:25.677378 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.677389 | orchestrator |
2026-02-15 06:35:25.677399 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:35:25.677410 | orchestrator | Sunday 15 February 2026 06:34:47 +0000 (0:00:01.438) 0:41:25.798 *******
2026-02-15 06:35:25.677489 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.677503 | orchestrator |
2026-02-15 06:35:25.677514 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:35:25.677525 | orchestrator | Sunday 15 February 2026 06:34:48 +0000 (0:00:01.183) 0:41:26.981 *******
2026-02-15 06:35:25.677536 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-15 06:35:25.677547 | orchestrator |
2026-02-15 06:35:25.677560 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-15 06:35:25.677609 | orchestrator | Sunday 15 February 2026 06:34:50 +0000 (0:00:01.344) 0:41:28.325 *******
2026-02-15 06:35:25.677630 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:35:25.677650 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:35:25.677670 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:35:25.677690 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 06:35:25.677708 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-15 06:35:25.677725 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-15 06:35:25.677739 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 06:35:25.677751 | orchestrator |
2026-02-15 06:35:25.677764 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-15 06:35:25.677776 | orchestrator | Sunday 15 February 2026 06:34:52 +0000 (0:00:02.241) 0:41:30.567 *******
2026-02-15 06:35:25.677788 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:35:25.677801 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:35:25.677813 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:35:25.677825 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 06:35:25.677837 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-15 06:35:25.677848 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-15 06:35:25.677861 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 06:35:25.677872 | orchestrator |
2026-02-15 06:35:25.677885 | orchestrator | TASK [Get osd numbers - non container] *****************************************
2026-02-15 06:35:25.677897 | orchestrator | Sunday 15 February 2026 06:34:54 +0000 (0:00:02.528) 0:41:33.096 *******
2026-02-15 06:35:25.677913 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.677932 | orchestrator |
2026-02-15 06:35:25.677945 | orchestrator | TASK [Set num_osds] ************************************************************
2026-02-15 06:35:25.677958 | orchestrator | Sunday 15 February 2026 06:34:56 +0000 (0:00:01.120) 0:41:34.217 *******
2026-02-15 06:35:25.677970 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.677981 | orchestrator |
2026-02-15 06:35:25.677992 | orchestrator | TASK [Set_fact container_exec_cmd_osd] *****************************************
2026-02-15 06:35:25.678002 | orchestrator | Sunday 15 February 2026 06:34:56 +0000 (0:00:00.787) 0:41:35.004 *******
2026-02-15 06:35:25.678013 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.678107 | orchestrator |
2026-02-15 06:35:25.678119 | orchestrator | TASK [Stop ceph osd] ***********************************************************
2026-02-15 06:35:25.678130 | orchestrator | Sunday 15 February 2026 06:34:57 +0000 (0:00:00.930) 0:41:35.935 *******
2026-02-15 06:35:25.678141 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-02-15 06:35:25.678152 | orchestrator | changed: [testbed-node-4] => (item=5)
2026-02-15 06:35:25.678163 | orchestrator |
2026-02-15 06:35:25.678173 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 06:35:25.678184 | orchestrator | Sunday 15 February 2026 06:35:01 +0000 (0:00:03.800) 0:41:39.736 *******
2026-02-15 06:35:25.678195 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-02-15 06:35:25.678206 | orchestrator |
2026-02-15 06:35:25.678217 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 06:35:25.678249 | orchestrator | Sunday 15 February 2026 06:35:02 +0000 (0:00:01.170) 0:41:40.906 *******
2026-02-15 06:35:25.678269 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-02-15 06:35:25.678291 | orchestrator |
2026-02-15 06:35:25.678302 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 06:35:25.678313 | orchestrator | Sunday 15 February 2026 06:35:03 +0000 (0:00:01.166) 0:41:42.073 *******
2026-02-15 06:35:25.678324 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.678335 | orchestrator |
2026-02-15 06:35:25.678345 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 06:35:25.678356 | orchestrator | Sunday 15 February 2026 06:35:05 +0000 (0:00:01.154) 0:41:43.228 *******
2026-02-15 06:35:25.678367 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.678377 | orchestrator |
2026-02-15 06:35:25.678388 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 06:35:25.678399 | orchestrator | Sunday 15 February 2026 06:35:06 +0000 (0:00:01.539) 0:41:44.767 *******
2026-02-15 06:35:25.678409 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.678457 | orchestrator |
2026-02-15 06:35:25.678469 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 06:35:25.678480 | orchestrator | Sunday 15 February 2026 06:35:08 +0000 (0:00:01.541) 0:41:46.308 *******
2026-02-15 06:35:25.678491 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.678501 | orchestrator |
2026-02-15 06:35:25.678512 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 06:35:25.678523 | orchestrator | Sunday 15 February 2026 06:35:09 +0000 (0:00:01.538) 0:41:47.847 *******
2026-02-15 06:35:25.678533 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.678544 | orchestrator |
2026-02-15 06:35:25.678555 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 06:35:25.678565 | orchestrator | Sunday 15 February 2026 06:35:10 +0000 (0:00:01.231) 0:41:49.078 *******
2026-02-15 06:35:25.678576 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.678586 | orchestrator |
2026-02-15 06:35:25.678596 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 06:35:25.678607 | orchestrator | Sunday 15 February 2026 06:35:12 +0000 (0:00:01.188) 0:41:50.267 *******
2026-02-15 06:35:25.678621 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.678641 | orchestrator |
2026-02-15 06:35:25.678658 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 06:35:25.678678 | orchestrator | Sunday 15 February 2026 06:35:13 +0000 (0:00:01.166) 0:41:51.433 *******
2026-02-15 06:35:25.678699 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.678720 | orchestrator |
2026-02-15 06:35:25.678737 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 06:35:25.678748 | orchestrator | Sunday 15 February 2026 06:35:14 +0000 (0:00:01.580) 0:41:53.013 *******
2026-02-15 06:35:25.678759 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.678769 | orchestrator |
2026-02-15 06:35:25.678780 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 06:35:25.678791 | orchestrator | Sunday 15 February 2026 06:35:16 +0000 (0:00:01.533) 0:41:54.548 *******
2026-02-15 06:35:25.678802 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.678812 | orchestrator |
2026-02-15 06:35:25.678823 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 06:35:25.678834 | orchestrator | Sunday 15 February 2026 06:35:17 +0000 (0:00:00.768) 0:41:55.316 *******
2026-02-15 06:35:25.678844 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.678855 | orchestrator |
2026-02-15 06:35:25.678866 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 06:35:25.678876 | orchestrator | Sunday 15 February 2026 06:35:17 +0000 (0:00:00.765) 0:41:56.082 *******
2026-02-15 06:35:25.678887 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.678897 | orchestrator |
2026-02-15 06:35:25.678908 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 06:35:25.678919 | orchestrator | Sunday 15 February 2026 06:35:18 +0000 (0:00:00.837) 0:41:56.919 *******
2026-02-15 06:35:25.678929 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.678949 | orchestrator |
2026-02-15 06:35:25.678960 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 06:35:25.678971 | orchestrator | Sunday 15 February 2026 06:35:19 +0000 (0:00:00.876) 0:41:57.796 *******
2026-02-15 06:35:25.678982 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.678992 | orchestrator |
2026-02-15 06:35:25.679003 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 06:35:25.679014 | orchestrator | Sunday 15 February 2026 06:35:20 +0000 (0:00:00.783) 0:41:58.579 *******
2026-02-15 06:35:25.679024 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.679035 | orchestrator |
2026-02-15 06:35:25.679045 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 06:35:25.679056 | orchestrator | Sunday 15 February 2026 06:35:21 +0000 (0:00:00.830) 0:41:59.410 *******
2026-02-15 06:35:25.679067 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.679077 | orchestrator |
2026-02-15 06:35:25.679088 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 06:35:25.679099 | orchestrator | Sunday 15 February 2026 06:35:22 +0000 (0:00:00.802) 0:42:00.212 *******
2026-02-15 06:35:25.679109 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:35:25.679120 | orchestrator |
2026-02-15 06:35:25.679131 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 06:35:25.679141 | orchestrator | Sunday 15 February 2026 06:35:22 +0000 (0:00:00.771) 0:42:00.983 *******
2026-02-15 06:35:25.679152 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.679162 | orchestrator |
2026-02-15 06:35:25.679173 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 06:35:25.679184 | orchestrator | Sunday 15 February 2026 06:35:23 +0000 (0:00:00.902) 0:42:01.886 *******
2026-02-15 06:35:25.679194 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:35:25.679205 | orchestrator |
2026-02-15 06:35:25.679215 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-15 06:35:25.679226 | orchestrator | Sunday 15 February 2026 06:35:24 +0000 (0:00:01.046) 0:42:02.932 *******
2026-02-15 06:35:25.679245 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.526712 | orchestrator |
2026-02-15 06:36:08.526819 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-15 06:36:08.526832 | orchestrator | Sunday 15 February 2026 06:35:25 +0000 (0:00:00.839) 0:42:03.772 *******
2026-02-15 06:36:08.526839 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.526848 | orchestrator |
2026-02-15 06:36:08.526855 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-15 06:36:08.526862 | orchestrator | Sunday 15 February 2026 06:35:26 +0000 (0:00:00.795) 0:42:04.567 *******
2026-02-15 06:36:08.526869 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.526875 | orchestrator |
2026-02-15 06:36:08.526882 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-15 06:36:08.526888 | orchestrator | Sunday 15 February 2026 06:35:27 +0000 (0:00:00.777) 0:42:05.345 *******
2026-02-15 06:36:08.526895 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.526902 | orchestrator |
2026-02-15 06:36:08.526908 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-15 06:36:08.526915 | orchestrator | Sunday 15 February 2026 06:35:28 +0000 (0:00:00.774) 0:42:06.119 *******
2026-02-15 06:36:08.526921 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.526928 | orchestrator |
2026-02-15 06:36:08.526934 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-15 06:36:08.526942 | orchestrator | Sunday 15 February 2026 06:35:28 +0000 (0:00:00.761) 0:42:06.881 *******
2026-02-15 06:36:08.526948 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.526955 | orchestrator |
2026-02-15 06:36:08.526961 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-15 06:36:08.526968 | orchestrator | Sunday 15 February 2026 06:35:29 +0000 (0:00:00.773) 0:42:07.655 *******
2026-02-15 06:36:08.526975 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.526999 | orchestrator |
2026-02-15 06:36:08.527007 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-15 06:36:08.527014 | orchestrator | Sunday 15 February 2026 06:35:30 +0000 (0:00:00.792) 0:42:08.447 *******
2026-02-15 06:36:08.527021 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527027 | orchestrator |
2026-02-15 06:36:08.527034 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-15 06:36:08.527040 | orchestrator | Sunday 15 February 2026 06:35:31 +0000 (0:00:00.761) 0:42:09.209 *******
2026-02-15 06:36:08.527046 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527053 | orchestrator |
2026-02-15 06:36:08.527059 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-15 06:36:08.527066 | orchestrator | Sunday 15 February 2026 06:35:31 +0000 (0:00:00.782) 0:42:09.991 *******
2026-02-15 06:36:08.527072 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527079 | orchestrator |
2026-02-15 06:36:08.527085 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-15 06:36:08.527092 | orchestrator | Sunday 15 February 2026 06:35:32 +0000 (0:00:00.763) 0:42:10.754 *******
2026-02-15 06:36:08.527098 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527105 | orchestrator |
2026-02-15 06:36:08.527111 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-15 06:36:08.527118 | orchestrator | Sunday 15 February 2026 06:35:33 +0000 (0:00:00.817) 0:42:11.572 *******
2026-02-15 06:36:08.527124 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527131 | orchestrator |
2026-02-15 06:36:08.527137 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-15 06:36:08.527144 | orchestrator | Sunday 15 February 2026 06:35:34 +0000 (0:00:00.962) 0:42:12.535 *******
2026-02-15 06:36:08.527150 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:36:08.527158 | orchestrator |
2026-02-15 06:36:08.527164 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-15 06:36:08.527171 | orchestrator | Sunday 15 February 2026 06:35:36 +0000 (0:00:01.590) 0:42:14.125 *******
2026-02-15 06:36:08.527177 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:36:08.527183 | orchestrator |
2026-02-15 06:36:08.527190 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-15 06:36:08.527196 | orchestrator | Sunday 15 February 2026 06:35:37 +0000 (0:00:01.853) 0:42:15.979 *******
2026-02-15 06:36:08.527203 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4
2026-02-15 06:36:08.527210 | orchestrator |
2026-02-15 06:36:08.527217 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-15 06:36:08.527223 | orchestrator | Sunday 15 February 2026 06:35:39 +0000 (0:00:01.212) 0:42:17.192 *******
2026-02-15 06:36:08.527230 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527236 | orchestrator |
2026-02-15 06:36:08.527243 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-15 06:36:08.527249 | orchestrator | Sunday 15 February 2026 06:35:40 +0000 (0:00:01.136) 0:42:18.328 *******
2026-02-15 06:36:08.527256 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527262 | orchestrator |
2026-02-15 06:36:08.527269 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-15 06:36:08.527277 | orchestrator | Sunday 15 February 2026 06:35:41 +0000 (0:00:01.283) 0:42:19.612 *******
2026-02-15 06:36:08.527285 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-15 06:36:08.527293 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-15 06:36:08.527301 | orchestrator |
2026-02-15 06:36:08.527309 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-15 06:36:08.527316 | orchestrator | Sunday 15 February 2026 06:35:43 +0000 (0:00:01.840) 0:42:21.453 *******
2026-02-15 06:36:08.527324 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:36:08.527332 | orchestrator |
2026-02-15 06:36:08.527340 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-15 06:36:08.527354 | orchestrator | Sunday 15 February 2026 06:35:44 +0000 (0:00:01.467) 0:42:22.920 *******
2026-02-15 06:36:08.527362 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527370 | orchestrator |
2026-02-15 06:36:08.527389 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-15 06:36:08.527422 | orchestrator | Sunday 15 February 2026 06:35:45 +0000 (0:00:01.177) 0:42:24.098 *******
2026-02-15 06:36:08.527430 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527438 | orchestrator |
2026-02-15 06:36:08.527445 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-15 06:36:08.527453 | orchestrator | Sunday 15 February 2026 06:35:46 +0000 (0:00:00.795) 0:42:24.894 *******
2026-02-15 06:36:08.527461 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527468 | orchestrator |
2026-02-15 06:36:08.527476 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-15 06:36:08.527484 | orchestrator | Sunday 15 February 2026 06:35:47 +0000 (0:00:00.781) 0:42:25.675 *******
2026-02-15 06:36:08.527491 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4
2026-02-15 06:36:08.527499 | orchestrator |
2026-02-15 06:36:08.527506 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-15 06:36:08.527514 | orchestrator | Sunday 15 February 2026 06:35:48 +0000 (0:00:01.269) 0:42:26.945 *******
2026-02-15 06:36:08.527522 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:36:08.527529 | orchestrator |
2026-02-15 06:36:08.527537 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-15 06:36:08.527545 | orchestrator | Sunday 15 February 2026 06:35:50 +0000 (0:00:01.719) 0:42:28.665 *******
2026-02-15 06:36:08.527552 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-15 06:36:08.527560 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-15 06:36:08.527567 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-15 06:36:08.527575 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527582 | orchestrator |
2026-02-15 06:36:08.527590 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-15 06:36:08.527598 | orchestrator | Sunday 15 February 2026 06:35:51 +0000 (0:00:01.163) 0:42:29.829 *******
2026-02-15 06:36:08.527605 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527613 | orchestrator |
2026-02-15 06:36:08.527621 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-15 06:36:08.527629 | orchestrator | Sunday 15 February 2026 06:35:52 +0000 (0:00:01.117) 0:42:30.947 *******
2026-02-15 06:36:08.527636 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527644 | orchestrator |
2026-02-15 06:36:08.527651 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-15 06:36:08.527657 | orchestrator | Sunday 15 February 2026 06:35:54 +0000 (0:00:01.194) 0:42:32.142 *******
2026-02-15 06:36:08.527664 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527670 | orchestrator |
2026-02-15 06:36:08.527677 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-15 06:36:08.527683 | orchestrator | Sunday 15 February 2026 06:35:55 +0000 (0:00:01.221) 0:42:33.364 *******
2026-02-15 06:36:08.527690 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527696 | orchestrator |
2026-02-15 06:36:08.527704 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-15 06:36:08.527714 | orchestrator | Sunday 15 February 2026 06:35:56 +0000 (0:00:01.190) 0:42:34.554 *******
2026-02-15 06:36:08.527725 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527736 | orchestrator |
2026-02-15 06:36:08.527746 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-15 06:36:08.527762 | orchestrator | Sunday 15 February 2026 06:35:57 +0000 (0:00:00.803) 0:42:35.358 *******
2026-02-15 06:36:08.527782 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:36:08.527793 | orchestrator |
2026-02-15 06:36:08.527804 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-15 06:36:08.527814 | orchestrator | Sunday 15 February 2026 06:35:59 +0000 (0:00:02.129) 0:42:37.487 *******
2026-02-15 06:36:08.527830 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:36:08.527843 | orchestrator |
2026-02-15 06:36:08.527854 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-15 06:36:08.527865 | orchestrator | Sunday 15 February 2026 06:36:00 +0000 (0:00:00.792) 0:42:38.280 *******
2026-02-15 06:36:08.527875 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4
2026-02-15 06:36:08.527887 | orchestrator |
2026-02-15 06:36:08.527897 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-15 06:36:08.527907 | orchestrator | Sunday 15 February 2026 06:36:01 +0000 (0:00:01.149) 0:42:39.430 *******
2026-02-15 06:36:08.527918 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527928 | orchestrator |
2026-02-15 06:36:08.527940 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-15 06:36:08.527947 | orchestrator | Sunday 15 February 2026 06:36:02 +0000 (0:00:01.235) 0:42:40.666 *******
2026-02-15 06:36:08.527953 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527960 | orchestrator |
2026-02-15 06:36:08.527966 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-15 06:36:08.527973 | orchestrator | Sunday 15 February 2026 06:36:03 +0000 (0:00:01.228) 0:42:41.894 *******
2026-02-15 06:36:08.527980 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.527986 | orchestrator |
2026-02-15 06:36:08.527992 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-15 06:36:08.527999 | orchestrator | Sunday 15 February 2026 06:36:04 +0000 (0:00:01.201) 0:42:43.095 *******
2026-02-15 06:36:08.528005 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.528012 | orchestrator |
2026-02-15 06:36:08.528018 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-15 06:36:08.528025 | orchestrator | Sunday 15 February 2026 06:36:06 +0000 (0:00:01.195) 0:42:44.291 *******
2026-02-15 06:36:08.528031 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:08.528038 | orchestrator |
2026-02-15 06:36:08.528044 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-15 06:36:08.528051 | orchestrator | Sunday 15 February 2026 06:36:07 +0000 (0:00:01.152) 0:42:45.443 *******
2026-02-15 06:36:08.528075 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.931661 | orchestrator |
2026-02-15 06:36:50.931756 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-15 06:36:50.931763 | orchestrator | Sunday 15 February 2026 06:36:08 +0000 (0:00:01.176) 0:42:46.620 *******
2026-02-15 06:36:50.931767 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.931772 | orchestrator |
2026-02-15 06:36:50.931776 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-15 06:36:50.931780 | orchestrator | Sunday 15 February 2026 06:36:09 +0000 (0:00:01.187) 0:42:47.807 *******
2026-02-15 06:36:50.931784 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.931788 | orchestrator |
2026-02-15 06:36:50.931792 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-15 06:36:50.931796 | orchestrator | Sunday 15 February 2026 06:36:10 +0000 (0:00:01.168) 0:42:48.976 *******
2026-02-15 06:36:50.931799 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:36:50.931804 | orchestrator |
2026-02-15 06:36:50.931808 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-15 06:36:50.931811 | orchestrator | Sunday 15 February 2026 06:36:11 +0000 (0:00:00.818) 0:42:49.795 *******
2026-02-15 06:36:50.931815 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4
2026-02-15 06:36:50.931819 | orchestrator |
2026-02-15 06:36:50.931823 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-15 06:36:50.931842 | orchestrator | Sunday 15 February 2026 06:36:12 +0000 (0:00:01.143) 0:42:50.938 *******
2026-02-15 06:36:50.931846 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph)
2026-02-15 06:36:50.931850 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/)
2026-02-15 06:36:50.931854 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-02-15 06:36:50.931857 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-02-15 06:36:50.931861 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-02-15 06:36:50.931865 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-02-15 06:36:50.931868 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-02-15 06:36:50.931872 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-02-15 06:36:50.931876 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-15 06:36:50.931880 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-15 06:36:50.931883 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 06:36:50.931888 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 06:36:50.931891 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 06:36:50.931895 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 06:36:50.931899 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2026-02-15 06:36:50.931903 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph)
2026-02-15 06:36:50.931907 | orchestrator |
2026-02-15 06:36:50.931910 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-02-15 06:36:50.931914 | orchestrator | Sunday 15 February 2026 06:36:19 +0000 (0:00:06.214) 0:42:57.153 *******
2026-02-15 06:36:50.931918 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4
2026-02-15 06:36:50.931921 | orchestrator |
2026-02-15 06:36:50.931925 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2026-02-15 06:36:50.931929 | orchestrator | Sunday 15 February 2026 06:36:20 +0000 (0:00:01.465) 0:42:58.619 *******
2026-02-15 06:36:50.931932 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-15 06:36:50.931938 | orchestrator |
2026-02-15 06:36:50.931941 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-02-15 06:36:50.931945 | orchestrator | Sunday 15 February 2026 06:36:21 +0000 (0:00:01.473) 0:43:00.092 *******
2026-02-15 06:36:50.931949 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-15 06:36:50.931952 | orchestrator |
2026-02-15 06:36:50.931956 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-02-15 06:36:50.931960 | orchestrator | Sunday 15 February 2026 06:36:23 +0000 (0:00:01.625) 0:43:01.718 *******
2026-02-15 06:36:50.931963 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.931967 | orchestrator |
2026-02-15 06:36:50.931970 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 06:36:50.931974 | orchestrator | Sunday 15 February 2026 06:36:24 +0000 (0:00:00.822) 0:43:02.541 *******
2026-02-15 06:36:50.931978 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.931981 | orchestrator |
2026-02-15 06:36:50.931985 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 06:36:50.931989 | orchestrator | Sunday 15 February 2026 06:36:25 +0000 (0:00:00.846) 0:43:03.388 *******
2026-02-15 06:36:50.931992 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.931996 | orchestrator |
2026-02-15 06:36:50.932000 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 06:36:50.932003 | orchestrator | Sunday 15 February 2026 06:36:26 +0000 (0:00:00.779) 0:43:04.168 *******
2026-02-15 06:36:50.932007 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.932017 | orchestrator |
2026-02-15 06:36:50.932020 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 06:36:50.932024 | orchestrator | Sunday 15 February 2026 06:36:26 +0000 (0:00:00.794) 0:43:04.963 *******
2026-02-15 06:36:50.932028 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.932031 | orchestrator |
2026-02-15 06:36:50.932035 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-15 06:36:50.932039 | orchestrator | Sunday 15 February 2026 06:36:27 +0000 (0:00:00.762) 0:43:05.726 *******
2026-02-15 06:36:50.932052 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.932056 | orchestrator |
2026-02-15 06:36:50.932062 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-15 06:36:50.932066 | orchestrator | Sunday 15 February 2026 06:36:28 +0000 (0:00:00.833) 0:43:06.560 *******
2026-02-15 06:36:50.932070 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.932074 | orchestrator |
2026-02-15 06:36:50.932077 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-15 06:36:50.932081 | orchestrator | Sunday 15 February 2026 06:36:29 +0000 (0:00:00.777) 0:43:07.337 *******
2026-02-15 06:36:50.932085 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.932089 | orchestrator |
2026-02-15 06:36:50.932092 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-15 06:36:50.932096 | orchestrator | Sunday 15 February 2026 06:36:30 +0000 (0:00:00.820) 0:43:08.158 *******
2026-02-15 06:36:50.932100 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.932104 | orchestrator |
2026-02-15 06:36:50.932108 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-15 06:36:50.932112 | orchestrator | Sunday 15 February 2026 06:36:30 +0000 (0:00:00.796) 0:43:08.955 *******
2026-02-15 06:36:50.932115 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:36:50.932119 | orchestrator |
2026-02-15 06:36:50.932123 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-15 06:36:50.932127 | orchestrator | Sunday 15 February 2026 06:36:31 +0000 (0:00:00.794) 0:43:09.749 *******
2026-02-15 06:36:50.932130 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:36:50.932134 | orchestrator |
2026-02-15 06:36:50.932138 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-15 06:36:50.932142 | orchestrator | Sunday 15 February 2026 06:36:32 +0000 (0:00:00.908) 0:43:10.658 *******
2026-02-15 06:36:50.932145 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)]
2026-02-15 06:36:50.932149 | orchestrator |
2026-02-15 06:36:50.932153 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-15 06:36:50.932156 | orchestrator | Sunday 15 February 2026 06:36:36 +0000 (0:00:04.090) 0:43:14.749 *******
2026-02-15 06:36:50.932160 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-02-15 06:36:50.932164 | orchestrator |
2026-02-15 06:36:50.932168 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-15 06:36:50.932171 | orchestrator | Sunday 15 February 2026 06:36:37 +0000 (0:00:00.862) 0:43:15.611 *******
2026-02-15 06:36:50.932177 |
orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-15 06:36:50.932183 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-15 06:36:50.932189 | orchestrator | 2026-02-15 06:36:50.932196 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-15 06:36:50.932200 | orchestrator | Sunday 15 February 2026 06:36:44 +0000 (0:00:07.332) 0:43:22.944 ******* 2026-02-15 06:36:50.932203 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:36:50.932207 | orchestrator | 2026-02-15 06:36:50.932211 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-15 06:36:50.932215 | orchestrator | Sunday 15 February 2026 06:36:45 +0000 (0:00:00.826) 0:43:23.770 ******* 2026-02-15 06:36:50.932218 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:36:50.932222 | orchestrator | 2026-02-15 06:36:50.932226 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-15 06:36:50.932229 | orchestrator | Sunday 15 February 2026 06:36:46 +0000 (0:00:00.816) 0:43:24.586 ******* 2026-02-15 06:36:50.932233 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:36:50.932238 | orchestrator | 2026-02-15 06:36:50.932242 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-15 06:36:50.932247 | orchestrator | Sunday 15 February 2026 06:36:47 +0000 (0:00:00.822) 0:43:25.409 ******* 2026-02-15 06:36:50.932251 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:36:50.932255 | orchestrator | 2026-02-15 06:36:50.932260 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-15 06:36:50.932264 | orchestrator | Sunday 15 February 2026 06:36:48 +0000 (0:00:00.840) 0:43:26.249 ******* 2026-02-15 06:36:50.932268 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:36:50.932273 | orchestrator | 2026-02-15 06:36:50.932277 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-15 06:36:50.932281 | orchestrator | Sunday 15 February 2026 06:36:48 +0000 (0:00:00.809) 0:43:27.059 ******* 2026-02-15 06:36:50.932285 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:36:50.932289 | orchestrator | 2026-02-15 06:36:50.932293 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-15 06:36:50.932298 | orchestrator | Sunday 15 February 2026 06:36:49 +0000 (0:00:00.896) 0:43:27.956 ******* 2026-02-15 06:36:50.932302 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 06:36:50.932306 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 06:36:50.932313 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 06:37:40.917488 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:37:40.917590 | orchestrator | 2026-02-15 06:37:40.917603 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 06:37:40.917614 | orchestrator | Sunday 15 February 2026 06:36:50 +0000 (0:00:01.066) 0:43:29.023 ******* 2026-02-15 06:37:40.917622 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 06:37:40.917630 | 
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 06:37:40.917638 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 06:37:40.917646 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:37:40.917654 | orchestrator | 2026-02-15 06:37:40.917662 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 06:37:40.917670 | orchestrator | Sunday 15 February 2026 06:36:52 +0000 (0:00:01.430) 0:43:30.453 ******* 2026-02-15 06:37:40.917678 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 06:37:40.917686 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 06:37:40.917694 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 06:37:40.917702 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:37:40.917710 | orchestrator | 2026-02-15 06:37:40.917718 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 06:37:40.917725 | orchestrator | Sunday 15 February 2026 06:36:54 +0000 (0:00:01.651) 0:43:32.105 ******* 2026-02-15 06:37:40.917733 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:37:40.917742 | orchestrator | 2026-02-15 06:37:40.917750 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 06:37:40.917778 | orchestrator | Sunday 15 February 2026 06:36:54 +0000 (0:00:00.933) 0:43:33.038 ******* 2026-02-15 06:37:40.917786 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-15 06:37:40.917794 | orchestrator | 2026-02-15 06:37:40.917802 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-15 06:37:40.917810 | orchestrator | Sunday 15 February 2026 06:36:55 +0000 (0:00:01.064) 0:43:34.103 ******* 2026-02-15 06:37:40.917818 | orchestrator | changed: [testbed-node-4] 2026-02-15 06:37:40.917826 | orchestrator | 
2026-02-15 06:37:40.917833 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-15 06:37:40.917841 | orchestrator | Sunday 15 February 2026 06:36:57 +0000 (0:00:01.551) 0:43:35.655 ******* 2026-02-15 06:37:40.917849 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:37:40.917857 | orchestrator | 2026-02-15 06:37:40.917865 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-15 06:37:40.917873 | orchestrator | Sunday 15 February 2026 06:36:58 +0000 (0:00:00.855) 0:43:36.510 ******* 2026-02-15 06:37:40.917881 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:37:40.917890 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:37:40.917897 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:37:40.917905 | orchestrator | 2026-02-15 06:37:40.917913 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-15 06:37:40.917920 | orchestrator | Sunday 15 February 2026 06:36:59 +0000 (0:00:01.320) 0:43:37.831 ******* 2026-02-15 06:37:40.917928 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-4 2026-02-15 06:37:40.917936 | orchestrator | 2026-02-15 06:37:40.917944 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-15 06:37:40.917953 | orchestrator | Sunday 15 February 2026 06:37:00 +0000 (0:00:01.106) 0:43:38.938 ******* 2026-02-15 06:37:40.917962 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:37:40.917971 | orchestrator | 2026-02-15 06:37:40.917980 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-15 06:37:40.917989 | orchestrator | Sunday 15 February 2026 06:37:01 +0000 (0:00:01.143) 
0:43:40.081 ******* 2026-02-15 06:37:40.917997 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:37:40.918006 | orchestrator | 2026-02-15 06:37:40.918057 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-15 06:37:40.918066 | orchestrator | Sunday 15 February 2026 06:37:03 +0000 (0:00:01.189) 0:43:41.271 ******* 2026-02-15 06:37:40.918073 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:37:40.918081 | orchestrator | 2026-02-15 06:37:40.918089 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-15 06:37:40.918096 | orchestrator | Sunday 15 February 2026 06:37:04 +0000 (0:00:01.472) 0:43:42.743 ******* 2026-02-15 06:37:40.918103 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:37:40.918111 | orchestrator | 2026-02-15 06:37:40.918119 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-15 06:37:40.918127 | orchestrator | Sunday 15 February 2026 06:37:05 +0000 (0:00:01.189) 0:43:43.933 ******* 2026-02-15 06:37:40.918135 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-15 06:37:40.918144 | orchestrator | ok: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-15 06:37:40.918151 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-15 06:37:40.918159 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-15 06:37:40.918167 | orchestrator | ok: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-15 06:37:40.918175 | orchestrator | 2026-02-15 06:37:40.918182 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-15 06:37:40.918196 | orchestrator | Sunday 15 February 2026 06:37:08 +0000 (0:00:02.554) 0:43:46.487 ******* 2026-02-15 
06:37:40.918203 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:37:40.918211 | orchestrator | 2026-02-15 06:37:40.918218 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-15 06:37:40.918226 | orchestrator | Sunday 15 February 2026 06:37:09 +0000 (0:00:00.750) 0:43:47.238 ******* 2026-02-15 06:37:40.918251 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-4 2026-02-15 06:37:40.918260 | orchestrator | 2026-02-15 06:37:40.918268 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-15 06:37:40.918275 | orchestrator | Sunday 15 February 2026 06:37:10 +0000 (0:00:01.153) 0:43:48.391 ******* 2026-02-15 06:37:40.918282 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-15 06:37:40.918290 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2026-02-15 06:37:40.918297 | orchestrator | 2026-02-15 06:37:40.918304 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-15 06:37:40.918312 | orchestrator | Sunday 15 February 2026 06:37:12 +0000 (0:00:01.866) 0:43:50.258 ******* 2026-02-15 06:37:40.918320 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 06:37:40.918328 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-15 06:37:40.918335 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 06:37:40.918342 | orchestrator | 2026-02-15 06:37:40.918351 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-15 06:37:40.918376 | orchestrator | Sunday 15 February 2026 06:37:15 +0000 (0:00:03.244) 0:43:53.503 ******* 2026-02-15 06:37:40.918383 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-15 06:37:40.918390 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-15 
06:37:40.918397 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:37:40.918404 | orchestrator | 2026-02-15 06:37:40.918410 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-15 06:37:40.918417 | orchestrator | Sunday 15 February 2026 06:37:16 +0000 (0:00:01.594) 0:43:55.098 ******* 2026-02-15 06:37:40.918424 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:37:40.918430 | orchestrator | 2026-02-15 06:37:40.918437 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-15 06:37:40.918443 | orchestrator | Sunday 15 February 2026 06:37:17 +0000 (0:00:00.939) 0:43:56.038 ******* 2026-02-15 06:37:40.918450 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:37:40.918457 | orchestrator | 2026-02-15 06:37:40.918463 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-15 06:37:40.918470 | orchestrator | Sunday 15 February 2026 06:37:18 +0000 (0:00:00.786) 0:43:56.825 ******* 2026-02-15 06:37:40.918477 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:37:40.918483 | orchestrator | 2026-02-15 06:37:40.918490 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-15 06:37:40.918496 | orchestrator | Sunday 15 February 2026 06:37:19 +0000 (0:00:00.772) 0:43:57.597 ******* 2026-02-15 06:37:40.918503 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-4 2026-02-15 06:37:40.918510 | orchestrator | 2026-02-15 06:37:40.918516 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-15 06:37:40.918523 | orchestrator | Sunday 15 February 2026 06:37:20 +0000 (0:00:01.167) 0:43:58.765 ******* 2026-02-15 06:37:40.918529 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:37:40.918536 | orchestrator | 2026-02-15 06:37:40.918543 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-15 06:37:40.918549 | orchestrator | Sunday 15 February 2026 06:37:22 +0000 (0:00:01.480) 0:44:00.246 ******* 2026-02-15 06:37:40.918556 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:37:40.918562 | orchestrator | 2026-02-15 06:37:40.918569 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-15 06:37:40.918576 | orchestrator | Sunday 15 February 2026 06:37:25 +0000 (0:00:03.401) 0:44:03.647 ******* 2026-02-15 06:37:40.918587 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-4 2026-02-15 06:37:40.918594 | orchestrator | 2026-02-15 06:37:40.918600 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-15 06:37:40.918607 | orchestrator | Sunday 15 February 2026 06:37:26 +0000 (0:00:01.252) 0:44:04.900 ******* 2026-02-15 06:37:40.918614 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:37:40.918620 | orchestrator | 2026-02-15 06:37:40.918627 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-15 06:37:40.918633 | orchestrator | Sunday 15 February 2026 06:37:29 +0000 (0:00:02.966) 0:44:07.866 ******* 2026-02-15 06:37:40.918640 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:37:40.918647 | orchestrator | 2026-02-15 06:37:40.918653 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-15 06:37:40.918660 | orchestrator | Sunday 15 February 2026 06:37:31 +0000 (0:00:01.923) 0:44:09.790 ******* 2026-02-15 06:37:40.918666 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:37:40.918673 | orchestrator | 2026-02-15 06:37:40.918680 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-15 06:37:40.918687 | orchestrator | Sunday 15 February 2026 06:37:33 +0000 (0:00:02.246) 0:44:12.036 ******* 2026-02-15 
06:37:40.918693 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:37:40.918700 | orchestrator | 2026-02-15 06:37:40.918706 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-15 06:37:40.918713 | orchestrator | Sunday 15 February 2026 06:37:35 +0000 (0:00:01.158) 0:44:13.195 ******* 2026-02-15 06:37:40.918720 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:37:40.918726 | orchestrator | 2026-02-15 06:37:40.918733 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-15 06:37:40.918739 | orchestrator | Sunday 15 February 2026 06:37:36 +0000 (0:00:01.155) 0:44:14.350 ******* 2026-02-15 06:37:40.918746 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-15 06:37:40.918753 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-15 06:37:40.918759 | orchestrator | 2026-02-15 06:37:40.918766 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-15 06:37:40.918772 | orchestrator | Sunday 15 February 2026 06:37:38 +0000 (0:00:01.799) 0:44:16.150 ******* 2026-02-15 06:37:40.918779 | orchestrator | ok: [testbed-node-4] => (item=1) 2026-02-15 06:37:40.918786 | orchestrator | ok: [testbed-node-4] => (item=5) 2026-02-15 06:37:40.918793 | orchestrator | 2026-02-15 06:37:40.918799 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-15 06:37:40.918814 | orchestrator | Sunday 15 February 2026 06:37:40 +0000 (0:00:02.858) 0:44:19.009 ******* 2026-02-15 06:38:31.881269 | orchestrator | changed: [testbed-node-4] => (item=1) 2026-02-15 06:38:31.881401 | orchestrator | changed: [testbed-node-4] => (item=5) 2026-02-15 06:38:31.881418 | orchestrator | 2026-02-15 06:38:31.881430 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-15 06:38:31.881441 | orchestrator | Sunday 15 February 2026 06:37:45 +0000 (0:00:04.129) 
0:44:23.138 ******* 2026-02-15 06:38:31.881451 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.881461 | orchestrator | 2026-02-15 06:38:31.881471 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2026-02-15 06:38:31.881481 | orchestrator | Sunday 15 February 2026 06:37:45 +0000 (0:00:00.913) 0:44:24.052 ******* 2026-02-15 06:38:31.881490 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.881500 | orchestrator | 2026-02-15 06:38:31.881509 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-02-15 06:38:31.881519 | orchestrator | Sunday 15 February 2026 06:37:46 +0000 (0:00:00.961) 0:44:25.013 ******* 2026-02-15 06:38:31.881528 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.881538 | orchestrator | 2026-02-15 06:38:31.881547 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] ************** 2026-02-15 06:38:31.881557 | orchestrator | Sunday 15 February 2026 06:37:47 +0000 (0:00:01.008) 0:44:26.022 ******* 2026-02-15 06:38:31.881601 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.881612 | orchestrator | 2026-02-15 06:38:31.881622 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] *** 2026-02-15 06:38:31.881632 | orchestrator | Sunday 15 February 2026 06:37:48 +0000 (0:00:00.799) 0:44:26.822 ******* 2026-02-15 06:38:31.881641 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.881651 | orchestrator | 2026-02-15 06:38:31.881660 | orchestrator | TASK [Waiting for clean pgs...] ************************************************ 2026-02-15 06:38:31.881670 | orchestrator | Sunday 15 February 2026 06:37:49 +0000 (0:00:00.804) 0:44:27.626 ******* 2026-02-15 06:38:31.881679 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (600 retries left). 
2026-02-15 06:38:31.881690 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (599 retries left). 2026-02-15 06:38:31.881699 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (598 retries left). 2026-02-15 06:38:31.881709 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for clean pgs... (597 retries left). 2026-02-15 06:38:31.881718 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:38:31.881728 | orchestrator | 2026-02-15 06:38:31.881738 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-02-15 06:38:31.881747 | orchestrator | Sunday 15 February 2026 06:38:03 +0000 (0:00:13.878) 0:44:41.505 ******* 2026-02-15 06:38:31.881757 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.881773 | orchestrator | 2026-02-15 06:38:31.881789 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-02-15 06:38:31.881806 | orchestrator | Sunday 15 February 2026 06:38:04 +0000 (0:00:00.783) 0:44:42.288 ******* 2026-02-15 06:38:31.881827 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.881852 | orchestrator | 2026-02-15 06:38:31.881869 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-02-15 06:38:31.881886 | orchestrator | Sunday 15 February 2026 06:38:05 +0000 (0:00:00.818) 0:44:43.106 ******* 2026-02-15 06:38:31.881904 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.881922 | orchestrator | 2026-02-15 06:38:31.881939 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-02-15 06:38:31.881956 | orchestrator | Sunday 15 February 2026 06:38:05 +0000 (0:00:00.775) 0:44:43.882 ******* 2026-02-15 06:38:31.881971 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.881987 | orchestrator 
| 2026-02-15 06:38:31.882004 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-02-15 06:38:31.882108 | orchestrator | Sunday 15 February 2026 06:38:06 +0000 (0:00:00.774) 0:44:44.656 ******* 2026-02-15 06:38:31.882129 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.882146 | orchestrator | 2026-02-15 06:38:31.882174 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2026-02-15 06:38:31.882192 | orchestrator | Sunday 15 February 2026 06:38:07 +0000 (0:00:00.800) 0:44:45.457 ******* 2026-02-15 06:38:31.882208 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.882224 | orchestrator | 2026-02-15 06:38:31.882240 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-02-15 06:38:31.882257 | orchestrator | Sunday 15 February 2026 06:38:08 +0000 (0:00:00.788) 0:44:46.246 ******* 2026-02-15 06:38:31.882274 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:38:31.882307 | orchestrator | 2026-02-15 06:38:31.882326 | orchestrator | PLAY [Upgrade ceph osds cluster] *********************************************** 2026-02-15 06:38:31.882364 | orchestrator | 2026-02-15 06:38:31.882382 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 06:38:31.882397 | orchestrator | Sunday 15 February 2026 06:38:09 +0000 (0:00:00.973) 0:44:47.220 ******* 2026-02-15 06:38:31.882413 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-15 06:38:31.882431 | orchestrator | 2026-02-15 06:38:31.882462 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 06:38:31.882479 | orchestrator | Sunday 15 February 2026 06:38:10 +0000 (0:00:01.307) 0:44:48.527 ******* 2026-02-15 06:38:31.882495 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:38:31.882511 | orchestrator | 
2026-02-15 06:38:31.882527 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 06:38:31.882542 | orchestrator | Sunday 15 February 2026 06:38:11 +0000 (0:00:01.492) 0:44:50.020 ******* 2026-02-15 06:38:31.882559 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:38:31.882576 | orchestrator | 2026-02-15 06:38:31.882592 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 06:38:31.882609 | orchestrator | Sunday 15 February 2026 06:38:13 +0000 (0:00:01.152) 0:44:51.172 ******* 2026-02-15 06:38:31.882665 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:38:31.882687 | orchestrator | 2026-02-15 06:38:31.882703 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 06:38:31.882718 | orchestrator | Sunday 15 February 2026 06:38:14 +0000 (0:00:01.514) 0:44:52.687 ******* 2026-02-15 06:38:31.882735 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:38:31.882750 | orchestrator | 2026-02-15 06:38:31.882767 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 06:38:31.882782 | orchestrator | Sunday 15 February 2026 06:38:15 +0000 (0:00:01.173) 0:44:53.860 ******* 2026-02-15 06:38:31.882798 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:38:31.882814 | orchestrator | 2026-02-15 06:38:31.882831 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 06:38:31.882846 | orchestrator | Sunday 15 February 2026 06:38:16 +0000 (0:00:01.175) 0:44:55.036 ******* 2026-02-15 06:38:31.882862 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:38:31.882872 | orchestrator | 2026-02-15 06:38:31.882882 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 06:38:31.882891 | orchestrator | Sunday 15 February 2026 06:38:18 +0000 (0:00:01.154) 0:44:56.190 
******* 2026-02-15 06:38:31.882901 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:38:31.882910 | orchestrator | 2026-02-15 06:38:31.882920 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 06:38:31.882929 | orchestrator | Sunday 15 February 2026 06:38:19 +0000 (0:00:01.149) 0:44:57.339 ******* 2026-02-15 06:38:31.882939 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:38:31.882948 | orchestrator | 2026-02-15 06:38:31.882957 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 06:38:31.882966 | orchestrator | Sunday 15 February 2026 06:38:20 +0000 (0:00:01.177) 0:44:58.517 ******* 2026-02-15 06:38:31.882976 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:38:31.882985 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:38:31.883055 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:38:31.883068 | orchestrator | 2026-02-15 06:38:31.883078 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 06:38:31.883087 | orchestrator | Sunday 15 February 2026 06:38:22 +0000 (0:00:02.004) 0:45:00.521 ******* 2026-02-15 06:38:31.883097 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:38:31.883106 | orchestrator | 2026-02-15 06:38:31.883116 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 06:38:31.883126 | orchestrator | Sunday 15 February 2026 06:38:23 +0000 (0:00:01.287) 0:45:01.809 ******* 2026-02-15 06:38:31.883135 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:38:31.883145 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:38:31.883154 | 
orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:38:31.883164 | orchestrator |
2026-02-15 06:38:31.883173 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-15 06:38:31.883193 | orchestrator | Sunday 15 February 2026 06:38:26 +0000 (0:00:03.276) 0:45:05.085 *******
2026-02-15 06:38:31.883202 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-15 06:38:31.883212 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-15 06:38:31.883222 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-15 06:38:31.883231 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:38:31.883241 | orchestrator |
2026-02-15 06:38:31.883250 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-15 06:38:31.883260 | orchestrator | Sunday 15 February 2026 06:38:28 +0000 (0:00:01.874) 0:45:06.960 *******
2026-02-15 06:38:31.883272 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-15 06:38:31.883285 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-15 06:38:31.883295 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-15 06:38:31.883304 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:38:31.883314 | orchestrator |
2026-02-15 06:38:31.883324 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-15 06:38:31.883386 | orchestrator | Sunday 15 February 2026 06:38:30 +0000 (0:00:01.732) 0:45:08.693 *******
2026-02-15 06:38:31.883401 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:31.883432 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:50.969770 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:50.969888 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:38:50.969908 | orchestrator |
2026-02-15 06:38:50.969921 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-15 06:38:50.969933 | orchestrator | Sunday 15 February 2026 06:38:31 +0000 (0:00:01.281) 0:45:09.974 *******
2026-02-15 06:38:50.969946 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:38:24.618213', 'end': '2026-02-15 06:38:24.665991', 'delta': '0:00:00.047778', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-15 06:38:50.969986 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:38:25.165693', 'end': '2026-02-15 06:38:25.216309', 'delta': '0:00:00.050616', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-15 06:38:50.969998 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:38:25.732160', 'end': '2026-02-15 06:38:25.782983', 'delta': '0:00:00.050823', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-15 06:38:50.970009 | orchestrator |
2026-02-15 06:38:50.970080 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-15 06:38:50.970092 | orchestrator | Sunday 15 February 2026 06:38:33 +0000 (0:00:01.241) 0:45:11.216 *******
2026-02-15 06:38:50.970102 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:38:50.970114 | orchestrator |
2026-02-15 06:38:50.970125 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-15 06:38:50.970136 | orchestrator | Sunday 15 February 2026 06:38:34 +0000 (0:00:01.278) 0:45:12.494 *******
2026-02-15 06:38:50.970146 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:38:50.970157 | orchestrator |
2026-02-15 06:38:50.970168 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-15 06:38:50.970179 | orchestrator | Sunday 15 February 2026 06:38:35 +0000 (0:00:01.304) 0:45:13.799 *******
2026-02-15 06:38:50.970189 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:38:50.970200 | orchestrator |
2026-02-15 06:38:50.970211 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-15 06:38:50.970221 | orchestrator | Sunday 15 February 2026 06:38:36 +0000 (0:00:01.147) 0:45:14.946 *******
2026-02-15 06:38:50.970232 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:38:50.970242 | orchestrator |
2026-02-15 06:38:50.970268 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 06:38:50.970279 | orchestrator | Sunday 15 February 2026 06:38:38 +0000 (0:00:02.015) 0:45:16.962 *******
2026-02-15 06:38:50.970290 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:38:50.970301 | orchestrator |
2026-02-15 06:38:50.970314 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-15 06:38:50.970356 | orchestrator | Sunday 15 February 2026 06:38:40 +0000 (0:00:01.149) 0:45:18.111 *******
2026-02-15 06:38:50.970387 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:38:50.970400 | orchestrator |
2026-02-15 06:38:50.970412 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-15 06:38:50.970425 | orchestrator | Sunday 15 February 2026 06:38:41 +0000 (0:00:01.135) 0:45:19.247 *******
2026-02-15 06:38:50.970437 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:38:50.970448 | orchestrator |
2026-02-15 06:38:50.970459 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 06:38:50.970479 | orchestrator | Sunday 15 February 2026 06:38:42 +0000 (0:00:01.266) 0:45:20.514 *******
2026-02-15 06:38:50.970489 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:38:50.970500 | orchestrator |
2026-02-15 06:38:50.970511 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-15 06:38:50.970522 | orchestrator | Sunday 15 February 2026 06:38:43 +0000 (0:00:01.165) 0:45:21.680 *******
2026-02-15 06:38:50.970532 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:38:50.970543 | orchestrator |
2026-02-15 06:38:50.970553 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-15 06:38:50.970564 | orchestrator | Sunday 15 February 2026 06:38:44 +0000 (0:00:01.133) 0:45:22.814 *******
2026-02-15 06:38:50.970575 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:38:50.970585 | orchestrator |
2026-02-15 06:38:50.970596 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-15 06:38:50.970607 | orchestrator | Sunday 15 February 2026 06:38:45 +0000 (0:00:01.277) 0:45:24.091 *******
2026-02-15 06:38:50.970617 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:38:50.970628 | orchestrator |
2026-02-15 06:38:50.970638 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-15 06:38:50.970649 | orchestrator | Sunday 15 February 2026 06:38:47 +0000 (0:00:01.122) 0:45:25.214 *******
2026-02-15 06:38:50.970660 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:38:50.970671 | orchestrator |
2026-02-15 06:38:50.970681 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-15 06:38:50.970692 | orchestrator | Sunday 15 February 2026 06:38:48 +0000 (0:00:01.164) 0:45:26.379 *******
2026-02-15 06:38:50.970703 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:38:50.970713 | orchestrator |
2026-02-15 06:38:50.970724 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-15 06:38:50.970736 | orchestrator | Sunday 15 February 2026 06:38:49 +0000 (0:00:01.284) 0:45:27.664 *******
2026-02-15 06:38:50.970746 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:38:50.970757 | orchestrator |
2026-02-15 06:38:50.970767 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-15 06:38:50.970778 | orchestrator | Sunday 15 February 2026 06:38:50 +0000 (0:00:01.154) 0:45:28.818 *******
2026-02-15 06:38:50.970790 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:38:50.970802 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'uuids': ['dcdf938a-1e00-4f8c-ba32-16bd01cbd7b7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z']}})
2026-02-15 06:38:50.970815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ca6afbc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-15 06:38:50.970847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9']}})
2026-02-15 06:38:52.107693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:38:52.107801 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:38:52.107818 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-15 06:38:52.107833 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:38:52.107845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP', 'dm-uuid-CRYPT-LUKS2-ddc473233b6d4a8581ea0c389df91130-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-15 06:38:52.107856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:38:52.107885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 'uuids': ['ddc47323-3b6d-4a85-81ea-0c389df91130'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP']}})
2026-02-15 06:38:52.107941 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978']}})
2026-02-15 06:38:52.107954 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:38:52.107970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3b30427', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-15 06:38:52.107993 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:38:52.108011 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:38:52.108032 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z', 'dm-uuid-CRYPT-LUKS2-dcdf938a1e004f8cba3216bd01cbd7b7-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-15 06:38:52.377127 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:38:52.377230 | orchestrator |
2026-02-15 06:38:52.377246 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-15 06:38:52.377259 | orchestrator | Sunday 15 February 2026 06:38:52 +0000 (0:00:01.384) 0:45:30.202 *******
2026-02-15 06:38:52.377273 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:52.377287 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'uuids': ['dcdf938a-1e00-4f8c-ba32-16bd01cbd7b7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:52.377300 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ca6afbc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:52.377392 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:52.377428 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:52.377441 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:52.377453 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:52.377464 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:52.377475 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP', 'dm-uuid-CRYPT-LUKS2-ddc473233b6d4a8581ea0c389df91130-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:52.377501 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:38:52.377520 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 'uuids': ['ddc47323-3b6d-4a85-81ea-0c389df91130'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:39:05.824268 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:39:05.824480 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:39:05.824517 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3b30427', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:39:05.824571 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:39:05.824584 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:39:05.824597 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z', 'dm-uuid-CRYPT-LUKS2-dcdf938a1e004f8cba3216bd01cbd7b7-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:39:05.824617 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:39:05.824631 | orchestrator |
2026-02-15 06:39:05.824643 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-15 06:39:05.824655 | orchestrator | Sunday 15 February 2026 06:38:53 +0000 (0:00:01.592) 0:45:31.730 *******
2026-02-15 06:39:05.824666 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:39:05.824677 | orchestrator |
2026-02-15 06:39:05.824688 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-15 06:39:05.824699 | orchestrator | Sunday 15 February 2026 06:38:55 +0000 (0:00:01.123) 0:45:33.323 *******
2026-02-15 06:39:05.824709 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:39:05.824720 | orchestrator |
2026-02-15 06:39:05.824730 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 06:39:05.824741 | orchestrator | Sunday 15 February 2026 06:38:56 +0000 (0:00:01.123) 0:45:34.447 *******
2026-02-15 06:39:05.824752 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:39:05.824763 | orchestrator |
2026-02-15 06:39:05.824782 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 06:39:05.824801 | orchestrator | Sunday 15 February 2026 06:38:57 +0000 (0:00:01.456) 0:45:35.904 *******
2026-02-15 06:39:05.824826 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:39:05.824854 | orchestrator |
2026-02-15 06:39:05.824873 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 06:39:05.824891 | orchestrator | Sunday 15 February 2026 06:38:58 +0000 (0:00:01.145) 0:45:37.050 *******
2026-02-15 06:39:05.824907 | orchestrator | skipping: [testbed-node-5]
2026-02-15
06:39:05.824925 | orchestrator | 2026-02-15 06:39:05.824945 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:39:05.824963 | orchestrator | Sunday 15 February 2026 06:39:00 +0000 (0:00:01.298) 0:45:38.349 ******* 2026-02-15 06:39:05.824982 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:05.825000 | orchestrator | 2026-02-15 06:39:05.825018 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:39:05.825049 | orchestrator | Sunday 15 February 2026 06:39:01 +0000 (0:00:01.151) 0:45:39.500 ******* 2026-02-15 06:39:05.825065 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-02-15 06:39:05.825078 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-02-15 06:39:05.825090 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-02-15 06:39:05.825103 | orchestrator | 2026-02-15 06:39:05.825115 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:39:05.825129 | orchestrator | Sunday 15 February 2026 06:39:03 +0000 (0:00:02.104) 0:45:41.605 ******* 2026-02-15 06:39:05.825142 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-15 06:39:05.825153 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-15 06:39:05.825163 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-15 06:39:05.825174 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:05.825184 | orchestrator | 2026-02-15 06:39:05.825195 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-15 06:39:05.825206 | orchestrator | Sunday 15 February 2026 06:39:04 +0000 (0:00:01.182) 0:45:42.788 ******* 2026-02-15 06:39:05.825216 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5 2026-02-15 06:39:05.825228 | 
orchestrator | 2026-02-15 06:39:05.825250 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-15 06:39:49.332212 | orchestrator | Sunday 15 February 2026 06:39:05 +0000 (0:00:01.130) 0:45:43.918 ******* 2026-02-15 06:39:49.332403 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.332437 | orchestrator | 2026-02-15 06:39:49.332463 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-15 06:39:49.332481 | orchestrator | Sunday 15 February 2026 06:39:06 +0000 (0:00:01.158) 0:45:45.077 ******* 2026-02-15 06:39:49.332538 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.332559 | orchestrator | 2026-02-15 06:39:49.332577 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-15 06:39:49.332595 | orchestrator | Sunday 15 February 2026 06:39:08 +0000 (0:00:01.202) 0:45:46.280 ******* 2026-02-15 06:39:49.332614 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.332633 | orchestrator | 2026-02-15 06:39:49.332652 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-15 06:39:49.332671 | orchestrator | Sunday 15 February 2026 06:39:09 +0000 (0:00:01.191) 0:45:47.472 ******* 2026-02-15 06:39:49.332689 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.332707 | orchestrator | 2026-02-15 06:39:49.332726 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-15 06:39:49.332745 | orchestrator | Sunday 15 February 2026 06:39:10 +0000 (0:00:01.296) 0:45:48.768 ******* 2026-02-15 06:39:49.332765 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-15 06:39:49.332785 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-15 06:39:49.332803 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-5)  2026-02-15 06:39:49.332822 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.332841 | orchestrator | 2026-02-15 06:39:49.332861 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 06:39:49.332881 | orchestrator | Sunday 15 February 2026 06:39:11 +0000 (0:00:01.333) 0:45:50.102 ******* 2026-02-15 06:39:49.332900 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-15 06:39:49.332919 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-15 06:39:49.332938 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-15 06:39:49.332956 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.332975 | orchestrator | 2026-02-15 06:39:49.332995 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 06:39:49.333013 | orchestrator | Sunday 15 February 2026 06:39:13 +0000 (0:00:01.378) 0:45:51.481 ******* 2026-02-15 06:39:49.333032 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-15 06:39:49.333050 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-15 06:39:49.333069 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-15 06:39:49.333089 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.333108 | orchestrator | 2026-02-15 06:39:49.333126 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 06:39:49.333145 | orchestrator | Sunday 15 February 2026 06:39:14 +0000 (0:00:01.361) 0:45:52.842 ******* 2026-02-15 06:39:49.333163 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.333181 | orchestrator | 2026-02-15 06:39:49.333199 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 06:39:49.333216 | orchestrator | Sunday 15 February 2026 06:39:15 +0000 
(0:00:01.156) 0:45:53.999 ******* 2026-02-15 06:39:49.333235 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-15 06:39:49.333253 | orchestrator | 2026-02-15 06:39:49.333270 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-15 06:39:49.333289 | orchestrator | Sunday 15 February 2026 06:39:17 +0000 (0:00:01.711) 0:45:55.710 ******* 2026-02-15 06:39:49.333332 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:39:49.333351 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:39:49.333363 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:39:49.333373 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:39:49.333384 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:39:49.333394 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-15 06:39:49.333417 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:39:49.333428 | orchestrator | 2026-02-15 06:39:49.333454 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-15 06:39:49.333465 | orchestrator | Sunday 15 February 2026 06:39:19 +0000 (0:00:02.230) 0:45:57.941 ******* 2026-02-15 06:39:49.333475 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:39:49.333486 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:39:49.333496 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:39:49.333507 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-15 06:39:49.333517 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:39:49.333528 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5) 2026-02-15 06:39:49.333538 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:39:49.333549 | orchestrator | 2026-02-15 06:39:49.333559 | orchestrator | TASK [Get osd numbers - non container] ***************************************** 2026-02-15 06:39:49.333570 | orchestrator | Sunday 15 February 2026 06:39:22 +0000 (0:00:02.733) 0:46:00.675 ******* 2026-02-15 06:39:49.333581 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.333591 | orchestrator | 2026-02-15 06:39:49.333601 | orchestrator | TASK [Set num_osds] ************************************************************ 2026-02-15 06:39:49.333630 | orchestrator | Sunday 15 February 2026 06:39:23 +0000 (0:00:01.086) 0:46:01.761 ******* 2026-02-15 06:39:49.333641 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.333652 | orchestrator | 2026-02-15 06:39:49.333662 | orchestrator | TASK [Set_fact container_exec_cmd_osd] ***************************************** 2026-02-15 06:39:49.333673 | orchestrator | Sunday 15 February 2026 06:39:24 +0000 (0:00:00.859) 0:46:02.620 ******* 2026-02-15 06:39:49.333683 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.333694 | orchestrator | 2026-02-15 06:39:49.333705 | orchestrator | TASK [Stop ceph osd] *********************************************************** 2026-02-15 06:39:49.333715 | orchestrator | Sunday 15 February 2026 06:39:25 +0000 (0:00:01.034) 0:46:03.655 ******* 2026-02-15 06:39:49.333726 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-15 06:39:49.333737 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-15 06:39:49.333748 | orchestrator | 2026-02-15 06:39:49.333758 | orchestrator | TASK [ceph-handler : Include 
check_running_cluster.yml] ************************ 2026-02-15 06:39:49.333769 | orchestrator | Sunday 15 February 2026 06:39:29 +0000 (0:00:03.697) 0:46:07.353 ******* 2026-02-15 06:39:49.333788 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5 2026-02-15 06:39:49.333807 | orchestrator | 2026-02-15 06:39:49.333825 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-15 06:39:49.333843 | orchestrator | Sunday 15 February 2026 06:39:30 +0000 (0:00:01.232) 0:46:08.586 ******* 2026-02-15 06:39:49.333862 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5 2026-02-15 06:39:49.333880 | orchestrator | 2026-02-15 06:39:49.333897 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-15 06:39:49.333914 | orchestrator | Sunday 15 February 2026 06:39:31 +0000 (0:00:01.126) 0:46:09.712 ******* 2026-02-15 06:39:49.333934 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.333952 | orchestrator | 2026-02-15 06:39:49.333970 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-15 06:39:49.333988 | orchestrator | Sunday 15 February 2026 06:39:32 +0000 (0:00:01.155) 0:46:10.868 ******* 2026-02-15 06:39:49.334007 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.334098 | orchestrator | 2026-02-15 06:39:49.334120 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-15 06:39:49.334139 | orchestrator | Sunday 15 February 2026 06:39:34 +0000 (0:00:01.527) 0:46:12.395 ******* 2026-02-15 06:39:49.334173 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.334191 | orchestrator | 2026-02-15 06:39:49.334203 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-15 06:39:49.334214 | orchestrator | 
Sunday 15 February 2026 06:39:35 +0000 (0:00:01.555) 0:46:13.951 ******* 2026-02-15 06:39:49.334224 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.334235 | orchestrator | 2026-02-15 06:39:49.334246 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-15 06:39:49.334256 | orchestrator | Sunday 15 February 2026 06:39:37 +0000 (0:00:01.985) 0:46:15.936 ******* 2026-02-15 06:39:49.334267 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.334277 | orchestrator | 2026-02-15 06:39:49.334288 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-15 06:39:49.334298 | orchestrator | Sunday 15 February 2026 06:39:38 +0000 (0:00:01.136) 0:46:17.073 ******* 2026-02-15 06:39:49.334349 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.334361 | orchestrator | 2026-02-15 06:39:49.334372 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-15 06:39:49.334382 | orchestrator | Sunday 15 February 2026 06:39:40 +0000 (0:00:01.237) 0:46:18.310 ******* 2026-02-15 06:39:49.334393 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.334403 | orchestrator | 2026-02-15 06:39:49.334414 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-15 06:39:49.334425 | orchestrator | Sunday 15 February 2026 06:39:41 +0000 (0:00:01.146) 0:46:19.457 ******* 2026-02-15 06:39:49.334435 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.334446 | orchestrator | 2026-02-15 06:39:49.334456 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-15 06:39:49.334467 | orchestrator | Sunday 15 February 2026 06:39:42 +0000 (0:00:01.573) 0:46:21.030 ******* 2026-02-15 06:39:49.334477 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.334488 | orchestrator | 2026-02-15 06:39:49.334498 | orchestrator | 
TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-15 06:39:49.334509 | orchestrator | Sunday 15 February 2026 06:39:44 +0000 (0:00:01.541) 0:46:22.572 ******* 2026-02-15 06:39:49.334520 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.334530 | orchestrator | 2026-02-15 06:39:49.334549 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-15 06:39:49.334560 | orchestrator | Sunday 15 February 2026 06:39:45 +0000 (0:00:00.811) 0:46:23.383 ******* 2026-02-15 06:39:49.334571 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.334581 | orchestrator | 2026-02-15 06:39:49.334592 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-15 06:39:49.334603 | orchestrator | Sunday 15 February 2026 06:39:46 +0000 (0:00:00.786) 0:46:24.170 ******* 2026-02-15 06:39:49.334613 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.334624 | orchestrator | 2026-02-15 06:39:49.334635 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-15 06:39:49.334646 | orchestrator | Sunday 15 February 2026 06:39:46 +0000 (0:00:00.805) 0:46:24.975 ******* 2026-02-15 06:39:49.334656 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.334667 | orchestrator | 2026-02-15 06:39:49.334678 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-15 06:39:49.334689 | orchestrator | Sunday 15 February 2026 06:39:47 +0000 (0:00:00.829) 0:46:25.804 ******* 2026-02-15 06:39:49.334699 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:39:49.334710 | orchestrator | 2026-02-15 06:39:49.334720 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-15 06:39:49.334731 | orchestrator | Sunday 15 February 2026 06:39:48 +0000 (0:00:00.840) 0:46:26.645 ******* 2026-02-15 06:39:49.334741 | 
orchestrator | skipping: [testbed-node-5] 2026-02-15 06:39:49.334752 | orchestrator | 2026-02-15 06:39:49.334775 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-15 06:40:30.290159 | orchestrator | Sunday 15 February 2026 06:39:49 +0000 (0:00:00.781) 0:46:27.427 ******* 2026-02-15 06:40:30.290352 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290370 | orchestrator | 2026-02-15 06:40:30.290381 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-15 06:40:30.290392 | orchestrator | Sunday 15 February 2026 06:39:50 +0000 (0:00:00.775) 0:46:28.202 ******* 2026-02-15 06:40:30.290402 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290412 | orchestrator | 2026-02-15 06:40:30.290423 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-15 06:40:30.290433 | orchestrator | Sunday 15 February 2026 06:39:50 +0000 (0:00:00.895) 0:46:29.098 ******* 2026-02-15 06:40:30.290442 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:40:30.290453 | orchestrator | 2026-02-15 06:40:30.290463 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-15 06:40:30.290472 | orchestrator | Sunday 15 February 2026 06:39:51 +0000 (0:00:00.785) 0:46:29.884 ******* 2026-02-15 06:40:30.290482 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:40:30.290491 | orchestrator | 2026-02-15 06:40:30.290500 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-15 06:40:30.290509 | orchestrator | Sunday 15 February 2026 06:39:52 +0000 (0:00:00.783) 0:46:30.668 ******* 2026-02-15 06:40:30.290519 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290540 | orchestrator | 2026-02-15 06:40:30.290549 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-15 
06:40:30.290557 | orchestrator | Sunday 15 February 2026 06:39:53 +0000 (0:00:00.786) 0:46:31.455 ******* 2026-02-15 06:40:30.290566 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290575 | orchestrator | 2026-02-15 06:40:30.290584 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-15 06:40:30.290592 | orchestrator | Sunday 15 February 2026 06:39:54 +0000 (0:00:00.801) 0:46:32.256 ******* 2026-02-15 06:40:30.290601 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290609 | orchestrator | 2026-02-15 06:40:30.290618 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-15 06:40:30.290627 | orchestrator | Sunday 15 February 2026 06:39:54 +0000 (0:00:00.815) 0:46:33.072 ******* 2026-02-15 06:40:30.290636 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290645 | orchestrator | 2026-02-15 06:40:30.290653 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-15 06:40:30.290662 | orchestrator | Sunday 15 February 2026 06:39:55 +0000 (0:00:00.792) 0:46:33.865 ******* 2026-02-15 06:40:30.290672 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290681 | orchestrator | 2026-02-15 06:40:30.290689 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-15 06:40:30.290698 | orchestrator | Sunday 15 February 2026 06:39:56 +0000 (0:00:00.796) 0:46:34.661 ******* 2026-02-15 06:40:30.290707 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290717 | orchestrator | 2026-02-15 06:40:30.290726 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-15 06:40:30.290735 | orchestrator | Sunday 15 February 2026 06:39:57 +0000 (0:00:00.778) 0:46:35.439 ******* 2026-02-15 06:40:30.290744 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290753 | 
orchestrator | 2026-02-15 06:40:30.290763 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-15 06:40:30.290773 | orchestrator | Sunday 15 February 2026 06:39:58 +0000 (0:00:00.796) 0:46:36.236 ******* 2026-02-15 06:40:30.290782 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290789 | orchestrator | 2026-02-15 06:40:30.290795 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-15 06:40:30.290801 | orchestrator | Sunday 15 February 2026 06:39:58 +0000 (0:00:00.835) 0:46:37.072 ******* 2026-02-15 06:40:30.290806 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290811 | orchestrator | 2026-02-15 06:40:30.290817 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-15 06:40:30.290822 | orchestrator | Sunday 15 February 2026 06:39:59 +0000 (0:00:00.796) 0:46:37.868 ******* 2026-02-15 06:40:30.290838 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290844 | orchestrator | 2026-02-15 06:40:30.290849 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-15 06:40:30.290855 | orchestrator | Sunday 15 February 2026 06:40:00 +0000 (0:00:00.889) 0:46:38.758 ******* 2026-02-15 06:40:30.290860 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290865 | orchestrator | 2026-02-15 06:40:30.290871 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-15 06:40:30.290898 | orchestrator | Sunday 15 February 2026 06:40:01 +0000 (0:00:00.836) 0:46:39.594 ******* 2026-02-15 06:40:30.290904 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290910 | orchestrator | 2026-02-15 06:40:30.290915 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-15 06:40:30.290920 | orchestrator | Sunday 15 
February 2026 06:40:02 +0000 (0:00:00.778) 0:46:40.373 ******* 2026-02-15 06:40:30.290926 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:40:30.290931 | orchestrator | 2026-02-15 06:40:30.290937 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-15 06:40:30.290942 | orchestrator | Sunday 15 February 2026 06:40:03 +0000 (0:00:01.607) 0:46:41.980 ******* 2026-02-15 06:40:30.290947 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:40:30.290953 | orchestrator | 2026-02-15 06:40:30.290959 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-15 06:40:30.290964 | orchestrator | Sunday 15 February 2026 06:40:05 +0000 (0:00:01.877) 0:46:43.858 ******* 2026-02-15 06:40:30.290969 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5 2026-02-15 06:40:30.290976 | orchestrator | 2026-02-15 06:40:30.290981 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-15 06:40:30.290987 | orchestrator | Sunday 15 February 2026 06:40:06 +0000 (0:00:01.141) 0:46:44.999 ******* 2026-02-15 06:40:30.290992 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.290997 | orchestrator | 2026-02-15 06:40:30.291003 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-15 06:40:30.291025 | orchestrator | Sunday 15 February 2026 06:40:08 +0000 (0:00:01.124) 0:46:46.124 ******* 2026-02-15 06:40:30.291030 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.291036 | orchestrator | 2026-02-15 06:40:30.291041 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-15 06:40:30.291047 | orchestrator | Sunday 15 February 2026 06:40:09 +0000 (0:00:01.147) 0:46:47.271 ******* 2026-02-15 06:40:30.291052 | orchestrator | ok: [testbed-node-5] => 
(item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-15 06:40:30.291060 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-15 06:40:30.291068 | orchestrator | 2026-02-15 06:40:30.291077 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-15 06:40:30.291085 | orchestrator | Sunday 15 February 2026 06:40:10 +0000 (0:00:01.828) 0:46:49.099 ******* 2026-02-15 06:40:30.291094 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:40:30.291102 | orchestrator | 2026-02-15 06:40:30.291110 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-15 06:40:30.291119 | orchestrator | Sunday 15 February 2026 06:40:12 +0000 (0:00:01.444) 0:46:50.543 ******* 2026-02-15 06:40:30.291129 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.291138 | orchestrator | 2026-02-15 06:40:30.291146 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-15 06:40:30.291154 | orchestrator | Sunday 15 February 2026 06:40:13 +0000 (0:00:01.209) 0:46:51.753 ******* 2026-02-15 06:40:30.291163 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.291169 | orchestrator | 2026-02-15 06:40:30.291177 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-15 06:40:30.291185 | orchestrator | Sunday 15 February 2026 06:40:14 +0000 (0:00:00.876) 0:46:52.629 ******* 2026-02-15 06:40:30.291194 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.291210 | orchestrator | 2026-02-15 06:40:30.291220 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-15 06:40:30.291229 | orchestrator | Sunday 15 February 2026 06:40:15 +0000 (0:00:00.796) 0:46:53.425 ******* 2026-02-15 06:40:30.291238 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for 
testbed-node-5 2026-02-15 06:40:30.291246 | orchestrator | 2026-02-15 06:40:30.291255 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-15 06:40:30.291264 | orchestrator | Sunday 15 February 2026 06:40:16 +0000 (0:00:01.181) 0:46:54.606 ******* 2026-02-15 06:40:30.291272 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:40:30.291280 | orchestrator | 2026-02-15 06:40:30.291306 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-15 06:40:30.291315 | orchestrator | Sunday 15 February 2026 06:40:18 +0000 (0:00:01.736) 0:46:56.343 ******* 2026-02-15 06:40:30.291324 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-15 06:40:30.291333 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-15 06:40:30.291341 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-15 06:40:30.291350 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.291371 | orchestrator | 2026-02-15 06:40:30.291381 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-15 06:40:30.291390 | orchestrator | Sunday 15 February 2026 06:40:19 +0000 (0:00:01.144) 0:46:57.488 ******* 2026-02-15 06:40:30.291399 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.291408 | orchestrator | 2026-02-15 06:40:30.291416 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-15 06:40:30.291425 | orchestrator | Sunday 15 February 2026 06:40:20 +0000 (0:00:01.166) 0:46:58.655 ******* 2026-02-15 06:40:30.291434 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.291443 | orchestrator | 2026-02-15 06:40:30.291452 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-15 06:40:30.291461 | 
orchestrator | Sunday 15 February 2026 06:40:21 +0000 (0:00:01.184) 0:46:59.839 ******* 2026-02-15 06:40:30.291470 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.291480 | orchestrator | 2026-02-15 06:40:30.291489 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-15 06:40:30.291498 | orchestrator | Sunday 15 February 2026 06:40:22 +0000 (0:00:01.148) 0:47:00.988 ******* 2026-02-15 06:40:30.291507 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.291516 | orchestrator | 2026-02-15 06:40:30.291532 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-15 06:40:30.291542 | orchestrator | Sunday 15 February 2026 06:40:24 +0000 (0:00:01.260) 0:47:02.249 ******* 2026-02-15 06:40:30.291550 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.291558 | orchestrator | 2026-02-15 06:40:30.291567 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-15 06:40:30.291575 | orchestrator | Sunday 15 February 2026 06:40:24 +0000 (0:00:00.789) 0:47:03.038 ******* 2026-02-15 06:40:30.291585 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:40:30.291595 | orchestrator | 2026-02-15 06:40:30.291605 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-15 06:40:30.291614 | orchestrator | Sunday 15 February 2026 06:40:27 +0000 (0:00:02.105) 0:47:05.143 ******* 2026-02-15 06:40:30.291623 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:40:30.291632 | orchestrator | 2026-02-15 06:40:30.291641 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-15 06:40:30.291650 | orchestrator | Sunday 15 February 2026 06:40:27 +0000 (0:00:00.790) 0:47:05.934 ******* 2026-02-15 06:40:30.291671 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5 
2026-02-15 06:40:30.291680 | orchestrator | 2026-02-15 06:40:30.291689 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-15 06:40:30.291706 | orchestrator | Sunday 15 February 2026 06:40:29 +0000 (0:00:01.288) 0:47:07.222 ******* 2026-02-15 06:40:30.291715 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:40:30.291724 | orchestrator | 2026-02-15 06:40:30.291733 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-15 06:40:30.291754 | orchestrator | Sunday 15 February 2026 06:40:30 +0000 (0:00:01.157) 0:47:08.380 ******* 2026-02-15 06:41:14.441900 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.442145 | orchestrator | 2026-02-15 06:41:14.442178 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-15 06:41:14.442198 | orchestrator | Sunday 15 February 2026 06:40:31 +0000 (0:00:01.131) 0:47:09.512 ******* 2026-02-15 06:41:14.442219 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.442239 | orchestrator | 2026-02-15 06:41:14.442257 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-15 06:41:14.442321 | orchestrator | Sunday 15 February 2026 06:40:32 +0000 (0:00:01.142) 0:47:10.655 ******* 2026-02-15 06:41:14.442342 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.442361 | orchestrator | 2026-02-15 06:41:14.442380 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-15 06:41:14.442400 | orchestrator | Sunday 15 February 2026 06:40:33 +0000 (0:00:01.145) 0:47:11.801 ******* 2026-02-15 06:41:14.442422 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.442442 | orchestrator | 2026-02-15 06:41:14.442463 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-15 06:41:14.442485 | orchestrator | 
Sunday 15 February 2026 06:40:34 +0000 (0:00:01.141) 0:47:12.943 ******* 2026-02-15 06:41:14.442505 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.442526 | orchestrator | 2026-02-15 06:41:14.442546 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-15 06:41:14.442566 | orchestrator | Sunday 15 February 2026 06:40:36 +0000 (0:00:01.223) 0:47:14.166 ******* 2026-02-15 06:41:14.442588 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.442608 | orchestrator | 2026-02-15 06:41:14.442627 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-15 06:41:14.442647 | orchestrator | Sunday 15 February 2026 06:40:37 +0000 (0:00:01.262) 0:47:15.429 ******* 2026-02-15 06:41:14.442668 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.442688 | orchestrator | 2026-02-15 06:41:14.442708 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-15 06:41:14.442728 | orchestrator | Sunday 15 February 2026 06:40:38 +0000 (0:00:01.188) 0:47:16.618 ******* 2026-02-15 06:41:14.442749 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:41:14.442771 | orchestrator | 2026-02-15 06:41:14.442789 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-15 06:41:14.442809 | orchestrator | Sunday 15 February 2026 06:40:39 +0000 (0:00:00.803) 0:47:17.421 ******* 2026-02-15 06:41:14.442828 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5 2026-02-15 06:41:14.442847 | orchestrator | 2026-02-15 06:41:14.442865 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-15 06:41:14.442884 | orchestrator | Sunday 15 February 2026 06:40:40 +0000 (0:00:01.130) 0:47:18.552 ******* 2026-02-15 06:41:14.442903 | orchestrator | ok: [testbed-node-5] => 
(item=/etc/ceph) 2026-02-15 06:41:14.442923 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-15 06:41:14.442942 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-15 06:41:14.442961 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-15 06:41:14.442981 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-15 06:41:14.443001 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-15 06:41:14.443019 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-02-15 06:41:14.443037 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-15 06:41:14.443056 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-15 06:41:14.443113 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-15 06:41:14.443134 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-15 06:41:14.443153 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-15 06:41:14.443172 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-15 06:41:14.443189 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-15 06:41:14.443206 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-02-15 06:41:14.443223 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-02-15 06:41:14.443241 | orchestrator | 2026-02-15 06:41:14.443311 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-15 06:41:14.443334 | orchestrator | Sunday 15 February 2026 06:40:46 +0000 (0:00:06.186) 0:47:24.738 ******* 2026-02-15 06:41:14.443352 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-02-15 06:41:14.443374 | orchestrator | 2026-02-15 06:41:14.443394 | orchestrator | TASK 
[ceph-config : Create rados gateway instance directories] ***************** 2026-02-15 06:41:14.443415 | orchestrator | Sunday 15 February 2026 06:40:47 +0000 (0:00:01.163) 0:47:25.902 ******* 2026-02-15 06:41:14.443435 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 06:41:14.443457 | orchestrator | 2026-02-15 06:41:14.443476 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-15 06:41:14.443497 | orchestrator | Sunday 15 February 2026 06:40:49 +0000 (0:00:01.494) 0:47:27.397 ******* 2026-02-15 06:41:14.443518 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 06:41:14.443538 | orchestrator | 2026-02-15 06:41:14.443558 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-15 06:41:14.443576 | orchestrator | Sunday 15 February 2026 06:40:50 +0000 (0:00:01.628) 0:47:29.025 ******* 2026-02-15 06:41:14.443597 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.443619 | orchestrator | 2026-02-15 06:41:14.443639 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-15 06:41:14.443685 | orchestrator | Sunday 15 February 2026 06:40:51 +0000 (0:00:00.824) 0:47:29.850 ******* 2026-02-15 06:41:14.443706 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.443727 | orchestrator | 2026-02-15 06:41:14.443747 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-15 06:41:14.443765 | orchestrator | Sunday 15 February 2026 06:40:52 +0000 (0:00:00.786) 0:47:30.636 ******* 2026-02-15 06:41:14.443785 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.443906 | orchestrator | 2026-02-15 06:41:14.443931 | orchestrator | TASK [ceph-config : 
Set_fact rejected_devices] ********************************* 2026-02-15 06:41:14.443949 | orchestrator | Sunday 15 February 2026 06:40:53 +0000 (0:00:00.781) 0:47:31.418 ******* 2026-02-15 06:41:14.443967 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.443985 | orchestrator | 2026-02-15 06:41:14.444002 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-15 06:41:14.444019 | orchestrator | Sunday 15 February 2026 06:40:54 +0000 (0:00:00.802) 0:47:32.220 ******* 2026-02-15 06:41:14.444037 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.444056 | orchestrator | 2026-02-15 06:41:14.444076 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-15 06:41:14.444095 | orchestrator | Sunday 15 February 2026 06:40:54 +0000 (0:00:00.863) 0:47:33.084 ******* 2026-02-15 06:41:14.444114 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.444133 | orchestrator | 2026-02-15 06:41:14.444151 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-15 06:41:14.444169 | orchestrator | Sunday 15 February 2026 06:40:55 +0000 (0:00:00.891) 0:47:33.976 ******* 2026-02-15 06:41:14.444209 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.444252 | orchestrator | 2026-02-15 06:41:14.444345 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-15 06:41:14.444367 | orchestrator | Sunday 15 February 2026 06:40:56 +0000 (0:00:00.789) 0:47:34.765 ******* 2026-02-15 06:41:14.444383 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.444399 | orchestrator | 2026-02-15 06:41:14.444415 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-15 06:41:14.444433 | orchestrator | Sunday 15 
February 2026 06:40:57 +0000 (0:00:00.788) 0:47:35.554 ******* 2026-02-15 06:41:14.444451 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.444469 | orchestrator | 2026-02-15 06:41:14.444484 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-15 06:41:14.444500 | orchestrator | Sunday 15 February 2026 06:40:58 +0000 (0:00:00.836) 0:47:36.390 ******* 2026-02-15 06:41:14.444515 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.444534 | orchestrator | 2026-02-15 06:41:14.444552 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-15 06:41:14.444570 | orchestrator | Sunday 15 February 2026 06:40:59 +0000 (0:00:00.785) 0:47:37.176 ******* 2026-02-15 06:41:14.444588 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:41:14.444604 | orchestrator | 2026-02-15 06:41:14.444620 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-15 06:41:14.444636 | orchestrator | Sunday 15 February 2026 06:40:59 +0000 (0:00:00.879) 0:47:38.055 ******* 2026-02-15 06:41:14.444651 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-02-15 06:41:14.444667 | orchestrator | 2026-02-15 06:41:14.444682 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-15 06:41:14.444697 | orchestrator | Sunday 15 February 2026 06:41:03 +0000 (0:00:03.893) 0:47:41.948 ******* 2026-02-15 06:41:14.444713 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 06:41:14.444731 | orchestrator | 2026-02-15 06:41:14.444749 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-15 06:41:14.444767 | orchestrator | Sunday 15 February 2026 06:41:04 +0000 (0:00:00.834) 0:47:42.783 ******* 2026-02-15 06:41:14.444797 | 
orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-15 06:41:14.444816 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-15 06:41:14.444834 | orchestrator | 2026-02-15 06:41:14.444851 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-15 06:41:14.444868 | orchestrator | Sunday 15 February 2026 06:41:11 +0000 (0:00:07.268) 0:47:50.052 ******* 2026-02-15 06:41:14.444884 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.444901 | orchestrator | 2026-02-15 06:41:14.444916 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-15 06:41:14.444931 | orchestrator | Sunday 15 February 2026 06:41:12 +0000 (0:00:00.778) 0:47:50.831 ******* 2026-02-15 06:41:14.444945 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.444961 | orchestrator | 2026-02-15 06:41:14.444974 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-15 06:41:14.444989 | orchestrator | Sunday 15 February 2026 06:41:13 +0000 (0:00:00.781) 0:47:51.612 ******* 2026-02-15 06:41:14.445019 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:41:14.445035 | orchestrator | 2026-02-15 06:41:14.445050 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to 
radosgw_address_block ipv4] **** 2026-02-15 06:41:14.445083 | orchestrator | Sunday 15 February 2026 06:41:14 +0000 (0:00:00.923) 0:47:52.536 ******* 2026-02-15 06:42:02.685190 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.685367 | orchestrator | 2026-02-15 06:42:02.685386 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-15 06:42:02.685399 | orchestrator | Sunday 15 February 2026 06:41:15 +0000 (0:00:00.796) 0:47:53.332 ******* 2026-02-15 06:42:02.685410 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.685422 | orchestrator | 2026-02-15 06:42:02.685434 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-15 06:42:02.685445 | orchestrator | Sunday 15 February 2026 06:41:16 +0000 (0:00:00.835) 0:47:54.168 ******* 2026-02-15 06:42:02.685456 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:42:02.685468 | orchestrator | 2026-02-15 06:42:02.685478 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-15 06:42:02.685489 | orchestrator | Sunday 15 February 2026 06:41:17 +0000 (0:00:00.950) 0:47:55.118 ******* 2026-02-15 06:42:02.685500 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-15 06:42:02.685511 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-15 06:42:02.685522 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-15 06:42:02.685533 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.685544 | orchestrator | 2026-02-15 06:42:02.685555 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 06:42:02.685571 | orchestrator | Sunday 15 February 2026 06:41:18 +0000 (0:00:01.466) 0:47:56.585 ******* 2026-02-15 06:42:02.685582 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-15 06:42:02.685593 | 
orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-15 06:42:02.685604 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-15 06:42:02.685614 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.685625 | orchestrator | 2026-02-15 06:42:02.685636 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 06:42:02.685647 | orchestrator | Sunday 15 February 2026 06:41:20 +0000 (0:00:01.534) 0:47:58.120 ******* 2026-02-15 06:42:02.685658 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-15 06:42:02.685668 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-15 06:42:02.685679 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-15 06:42:02.685690 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.685702 | orchestrator | 2026-02-15 06:42:02.685713 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 06:42:02.685723 | orchestrator | Sunday 15 February 2026 06:41:21 +0000 (0:00:01.105) 0:47:59.225 ******* 2026-02-15 06:42:02.685737 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:42:02.685749 | orchestrator | 2026-02-15 06:42:02.685762 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 06:42:02.685775 | orchestrator | Sunday 15 February 2026 06:41:21 +0000 (0:00:00.804) 0:48:00.029 ******* 2026-02-15 06:42:02.685794 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-02-15 06:42:02.685815 | orchestrator | 2026-02-15 06:42:02.685836 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-15 06:42:02.685858 | orchestrator | Sunday 15 February 2026 06:41:23 +0000 (0:00:01.145) 0:48:01.175 ******* 2026-02-15 06:42:02.685878 | orchestrator | changed: [testbed-node-5] 2026-02-15 06:42:02.685898 | orchestrator | 
2026-02-15 06:42:02.685918 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2026-02-15 06:42:02.685938 | orchestrator | Sunday 15 February 2026 06:41:24 +0000 (0:00:01.413) 0:48:02.589 ******* 2026-02-15 06:42:02.685959 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:42:02.686077 | orchestrator | 2026-02-15 06:42:02.686096 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2026-02-15 06:42:02.686109 | orchestrator | Sunday 15 February 2026 06:41:25 +0000 (0:00:00.864) 0:48:03.453 ******* 2026-02-15 06:42:02.686120 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:42:02.686132 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:42:02.686153 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:42:02.686164 | orchestrator | 2026-02-15 06:42:02.686188 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2026-02-15 06:42:02.686200 | orchestrator | Sunday 15 February 2026 06:41:27 +0000 (0:00:01.730) 0:48:05.184 ******* 2026-02-15 06:42:02.686210 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-5 2026-02-15 06:42:02.686221 | orchestrator | 2026-02-15 06:42:02.686232 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2026-02-15 06:42:02.686243 | orchestrator | Sunday 15 February 2026 06:41:28 +0000 (0:00:01.094) 0:48:06.278 ******* 2026-02-15 06:42:02.686282 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.686295 | orchestrator | 2026-02-15 06:42:02.686306 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2026-02-15 06:42:02.686316 | orchestrator | Sunday 15 February 2026 06:41:29 +0000 (0:00:01.145) 
0:48:07.424 ******* 2026-02-15 06:42:02.686327 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.686337 | orchestrator | 2026-02-15 06:42:02.686348 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2026-02-15 06:42:02.686359 | orchestrator | Sunday 15 February 2026 06:41:30 +0000 (0:00:01.164) 0:48:08.589 ******* 2026-02-15 06:42:02.686369 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:42:02.686380 | orchestrator | 2026-02-15 06:42:02.686391 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2026-02-15 06:42:02.686401 | orchestrator | Sunday 15 February 2026 06:41:31 +0000 (0:00:01.426) 0:48:10.015 ******* 2026-02-15 06:42:02.686412 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:42:02.686422 | orchestrator | 2026-02-15 06:42:02.686433 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2026-02-15 06:42:02.686444 | orchestrator | Sunday 15 February 2026 06:41:33 +0000 (0:00:01.643) 0:48:11.659 ******* 2026-02-15 06:42:02.686475 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2026-02-15 06:42:02.686487 | orchestrator | ok: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2026-02-15 06:42:02.686498 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2026-02-15 06:42:02.686509 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2026-02-15 06:42:02.686520 | orchestrator | ok: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2026-02-15 06:42:02.686530 | orchestrator | 2026-02-15 06:42:02.686541 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2026-02-15 06:42:02.686552 | orchestrator | Sunday 15 February 2026 06:41:37 +0000 (0:00:03.502) 0:48:15.161 ******* 2026-02-15 
06:42:02.686563 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.686573 | orchestrator | 2026-02-15 06:42:02.686584 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2026-02-15 06:42:02.686595 | orchestrator | Sunday 15 February 2026 06:41:37 +0000 (0:00:00.777) 0:48:15.939 ******* 2026-02-15 06:42:02.686605 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-5 2026-02-15 06:42:02.686616 | orchestrator | 2026-02-15 06:42:02.686626 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2026-02-15 06:42:02.686637 | orchestrator | Sunday 15 February 2026 06:41:39 +0000 (0:00:01.206) 0:48:17.146 ******* 2026-02-15 06:42:02.686648 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2026-02-15 06:42:02.686669 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2026-02-15 06:42:02.686680 | orchestrator | 2026-02-15 06:42:02.686691 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2026-02-15 06:42:02.686702 | orchestrator | Sunday 15 February 2026 06:41:40 +0000 (0:00:01.904) 0:48:19.051 ******* 2026-02-15 06:42:02.686717 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 06:42:02.686735 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-15 06:42:02.686753 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 06:42:02.686770 | orchestrator | 2026-02-15 06:42:02.686788 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2026-02-15 06:42:02.686807 | orchestrator | Sunday 15 February 2026 06:41:44 +0000 (0:00:03.221) 0:48:22.273 ******* 2026-02-15 06:42:02.686819 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-15 06:42:02.686830 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-15 
06:42:02.686841 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:42:02.686852 | orchestrator | 2026-02-15 06:42:02.686862 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2026-02-15 06:42:02.686873 | orchestrator | Sunday 15 February 2026 06:41:45 +0000 (0:00:01.599) 0:48:23.873 ******* 2026-02-15 06:42:02.686884 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.686895 | orchestrator | 2026-02-15 06:42:02.686905 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2026-02-15 06:42:02.686916 | orchestrator | Sunday 15 February 2026 06:41:46 +0000 (0:00:00.892) 0:48:24.766 ******* 2026-02-15 06:42:02.686927 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.686938 | orchestrator | 2026-02-15 06:42:02.686949 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2026-02-15 06:42:02.686959 | orchestrator | Sunday 15 February 2026 06:41:47 +0000 (0:00:00.796) 0:48:25.562 ******* 2026-02-15 06:42:02.686973 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.686992 | orchestrator | 2026-02-15 06:42:02.687010 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2026-02-15 06:42:02.687028 | orchestrator | Sunday 15 February 2026 06:41:48 +0000 (0:00:00.795) 0:48:26.358 ******* 2026-02-15 06:42:02.687045 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-5 2026-02-15 06:42:02.687061 | orchestrator | 2026-02-15 06:42:02.687078 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2026-02-15 06:42:02.687093 | orchestrator | Sunday 15 February 2026 06:41:49 +0000 (0:00:01.131) 0:48:27.489 ******* 2026-02-15 06:42:02.687110 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:42:02.687126 | orchestrator | 2026-02-15 06:42:02.687151 | orchestrator | TASK [ceph-osd : Collect osd 
ids] ********************************************** 2026-02-15 06:42:02.687166 | orchestrator | Sunday 15 February 2026 06:41:50 +0000 (0:00:01.479) 0:48:28.969 ******* 2026-02-15 06:42:02.687182 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:42:02.687199 | orchestrator | 2026-02-15 06:42:02.687216 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2026-02-15 06:42:02.687233 | orchestrator | Sunday 15 February 2026 06:41:54 +0000 (0:00:03.297) 0:48:32.267 ******* 2026-02-15 06:42:02.687250 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-5 2026-02-15 06:42:02.687295 | orchestrator | 2026-02-15 06:42:02.687314 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2026-02-15 06:42:02.687329 | orchestrator | Sunday 15 February 2026 06:41:55 +0000 (0:00:01.148) 0:48:33.415 ******* 2026-02-15 06:42:02.687347 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:42:02.687364 | orchestrator | 2026-02-15 06:42:02.687380 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2026-02-15 06:42:02.687397 | orchestrator | Sunday 15 February 2026 06:41:57 +0000 (0:00:01.961) 0:48:35.376 ******* 2026-02-15 06:42:02.687414 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:42:02.687431 | orchestrator | 2026-02-15 06:42:02.687448 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2026-02-15 06:42:02.687478 | orchestrator | Sunday 15 February 2026 06:41:59 +0000 (0:00:01.958) 0:48:37.335 ******* 2026-02-15 06:42:02.687498 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:42:02.687517 | orchestrator | 2026-02-15 06:42:02.687536 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2026-02-15 06:42:02.687555 | orchestrator | Sunday 15 February 2026 06:42:01 +0000 (0:00:02.211) 0:48:39.546 ******* 2026-02-15 
06:42:02.687571 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:42:02.687589 | orchestrator | 2026-02-15 06:42:02.687624 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2026-02-15 06:44:15.606285 | orchestrator | Sunday 15 February 2026 06:42:02 +0000 (0:00:01.226) 0:48:40.773 ******* 2026-02-15 06:44:15.606408 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:44:15.606435 | orchestrator | 2026-02-15 06:44:15.606456 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2026-02-15 06:44:15.606477 | orchestrator | Sunday 15 February 2026 06:42:03 +0000 (0:00:01.118) 0:48:41.891 ******* 2026-02-15 06:44:15.606496 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-15 06:44:15.606515 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-02-15 06:44:15.606527 | orchestrator | 2026-02-15 06:44:15.606538 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2026-02-15 06:44:15.606549 | orchestrator | Sunday 15 February 2026 06:42:05 +0000 (0:00:01.906) 0:48:43.798 ******* 2026-02-15 06:44:15.606560 | orchestrator | ok: [testbed-node-5] => (item=2) 2026-02-15 06:44:15.606571 | orchestrator | ok: [testbed-node-5] => (item=4) 2026-02-15 06:44:15.606581 | orchestrator | 2026-02-15 06:44:15.606592 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2026-02-15 06:44:15.606603 | orchestrator | Sunday 15 February 2026 06:42:08 +0000 (0:00:02.857) 0:48:46.655 ******* 2026-02-15 06:44:15.606615 | orchestrator | changed: [testbed-node-5] => (item=2) 2026-02-15 06:44:15.606626 | orchestrator | changed: [testbed-node-5] => (item=4) 2026-02-15 06:44:15.606637 | orchestrator | 2026-02-15 06:44:15.606648 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2026-02-15 06:44:15.606659 | orchestrator | Sunday 15 February 2026 06:42:12 +0000 (0:00:04.200) 
0:48:50.856 *******
2026-02-15 06:44:15.606678 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:44:15.606725 | orchestrator |
2026-02-15 06:44:15.606740 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-02-15 06:44:15.606759 | orchestrator | Sunday 15 February 2026 06:42:13 +0000 (0:00:00.878) 0:48:51.735 *******
2026-02-15 06:44:15.606778 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-02-15 06:44:15.606799 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:44:15.606819 | orchestrator |
2026-02-15 06:44:15.606838 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-02-15 06:44:15.606851 | orchestrator | Sunday 15 February 2026 06:42:27 +0000 (0:00:13.432) 0:49:05.167 *******
2026-02-15 06:44:15.606863 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:44:15.606876 | orchestrator |
2026-02-15 06:44:15.606888 | orchestrator | TASK [Scan ceph-disk osds with ceph-volume if deploying nautilus] **************
2026-02-15 06:44:15.606900 | orchestrator | Sunday 15 February 2026 06:42:27 +0000 (0:00:00.910) 0:49:06.078 *******
2026-02-15 06:44:15.606913 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:44:15.606932 | orchestrator |
2026-02-15 06:44:15.606952 | orchestrator | TASK [Activate scanned ceph-disk osds and migrate to ceph-volume if deploying nautilus] ***
2026-02-15 06:44:15.606974 | orchestrator | Sunday 15 February 2026 06:42:28 +0000 (0:00:00.785) 0:49:06.864 *******
2026-02-15 06:44:15.606993 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:44:15.607012 | orchestrator |
2026-02-15 06:44:15.607031 | orchestrator | TASK [Waiting for clean pgs...] ************************************************
2026-02-15 06:44:15.607050 | orchestrator | Sunday 15 February 2026 06:42:29 +0000 (0:00:00.962) 0:49:07.827 *******
2026-02-15 06:44:15.607105 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for clean pgs... (600 retries left).
2026-02-15 06:44:15.607120 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:44:15.607132 | orchestrator |
2026-02-15 06:44:15.607145 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-02-15 06:44:15.607158 | orchestrator | Sunday 15 February 2026 06:42:34 +0000 (0:00:05.027) 0:49:12.854 *******
2026-02-15 06:44:15.607170 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:44:15.607186 | orchestrator |
2026-02-15 06:44:15.607206 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-02-15 06:44:15.607226 | orchestrator | Sunday 15 February 2026 06:42:35 +0000 (0:00:00.788) 0:49:13.643 *******
2026-02-15 06:44:15.607244 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:44:15.607257 | orchestrator |
2026-02-15 06:44:15.607286 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-02-15 06:44:15.607306 | orchestrator | Sunday 15 February 2026 06:42:36 +0000 (0:00:00.771) 0:49:14.415 *******
2026-02-15 06:44:15.607325 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:44:15.607344 | orchestrator |
2026-02-15 06:44:15.607361 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-02-15 06:44:15.607373 | orchestrator | Sunday 15 February 2026 06:42:37 +0000 (0:00:00.810) 0:49:15.225 *******
2026-02-15 06:44:15.607383 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:44:15.607394 | orchestrator |
2026-02-15 06:44:15.607405 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-02-15 06:44:15.607415 | orchestrator | Sunday 15 February 2026 06:42:37 +0000 (0:00:00.771) 0:49:15.996 *******
2026-02-15 06:44:15.607426 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:44:15.607436 | orchestrator |
2026-02-15 06:44:15.607447 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-02-15 06:44:15.607457 | orchestrator | Sunday 15 February 2026 06:42:38 +0000 (0:00:00.811) 0:49:16.808 *******
2026-02-15 06:44:15.607475 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:44:15.607493 | orchestrator |
2026-02-15 06:44:15.607512 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-02-15 06:44:15.607530 | orchestrator | Sunday 15 February 2026 06:42:39 +0000 (0:00:00.769) 0:49:17.577 *******
2026-02-15 06:44:15.607550 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:44:15.607570 | orchestrator |
2026-02-15 06:44:15.607589 | orchestrator | PLAY [Complete osd upgrade] ****************************************************
2026-02-15 06:44:15.607606 | orchestrator |
2026-02-15 06:44:15.607623 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-15 06:44:15.607634 | orchestrator | Sunday 15 February 2026 06:42:41 +0000 (0:00:02.182) 0:49:19.760 *******
2026-02-15 06:44:15.607645 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:44:15.607656 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:44:15.607685 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:44:15.607753 | orchestrator |
2026-02-15 06:44:15.607772 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-15 06:44:15.607791 | orchestrator | Sunday 15 February 2026 06:42:43 +0000 (0:00:01.710) 0:49:21.470 *******
2026-02-15 06:44:15.607811 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:44:15.607830 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:44:15.607848 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:44:15.607863 | orchestrator |
2026-02-15 06:44:15.607874 | orchestrator | TASK [Re-enable pg autoscale on pools] *****************************************
2026-02-15 06:44:15.607885 | orchestrator | Sunday 15 February 2026 06:42:44 +0000 (0:00:01.411) 0:49:22.883 *******
2026-02-15 06:44:15.607896 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.mgr', 'mode': 'on'})
2026-02-15 06:44:15.607907 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_data', 'mode': 'on'})
2026-02-15 06:44:15.607926 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'cephfs_metadata', 'mode': 'on'})
2026-02-15 06:44:15.607961 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.data', 'mode': 'on'})
2026-02-15 06:44:15.607981 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.buckets.index', 'mode': 'on'})
2026-02-15 06:44:15.607996 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.control', 'mode': 'on'})
2026-02-15 06:44:15.608007 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.log', 'mode': 'on'})
2026-02-15 06:44:15.608018 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': 'default.rgw.meta', 'mode': 'on'})
2026-02-15 06:44:15.608034 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'name': '.rgw.root', 'mode': 'on'})
2026-02-15 06:44:15.608052 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'backups', 'mode': 'off'})
2026-02-15 06:44:15.608072 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'volumes', 'mode': 'off'})
2026-02-15 06:44:15.608091 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'images', 'mode': 'off'})
2026-02-15 06:44:15.608109 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'metrics', 'mode': 'off'})
2026-02-15 06:44:15.608122 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vms', 'mode': 'off'})
2026-02-15 06:44:15.608133 | orchestrator |
2026-02-15 06:44:15.608144 | orchestrator | TASK [Unset osd flags] *********************************************************
2026-02-15 06:44:15.608154 | orchestrator | Sunday 15 February 2026 06:43:57 +0000 (0:01:12.983) 0:50:35.866 *******
2026-02-15 06:44:15.608165 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=noout)
2026-02-15 06:44:15.608175 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=nodeep-scrub)
2026-02-15 06:44:15.608187 | orchestrator |
2026-02-15 06:44:15.608205 | orchestrator | TASK [Re-enable balancer] ******************************************************
2026-02-15 06:44:15.608225 | orchestrator | Sunday 15 February 2026 06:44:03 +0000 (0:00:05.805) 0:50:41.672 *******
2026-02-15 06:44:15.608242 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:44:15.608261 | orchestrator |
2026-02-15 06:44:15.608280 | orchestrator | PLAY [Upgrade ceph mdss cluster, deactivate all rank > 0] **********************
2026-02-15 06:44:15.608299 | orchestrator |
2026-02-15 06:44:15.608317 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-15 06:44:15.608336 | orchestrator | Sunday 15 February 2026 06:44:06 +0000 (0:00:03.286) 0:50:44.959 *******
2026-02-15 06:44:15.608355 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0
2026-02-15 06:44:15.608366 | orchestrator |
2026-02-15 06:44:15.608376 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-15 06:44:15.608387 | orchestrator | Sunday 15 February 2026 06:44:08 +0000 (0:00:01.165) 0:50:46.124 *******
2026-02-15 06:44:15.608398 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:15.608416 | orchestrator |
2026-02-15 06:44:15.608435 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-15 06:44:15.608453 | orchestrator | Sunday 15 February 2026 06:44:09 +0000 (0:00:01.490) 0:50:47.614 *******
2026-02-15 06:44:15.608472 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:15.608492 | orchestrator |
2026-02-15 06:44:15.608510 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-15 06:44:15.608528 | orchestrator | Sunday 15 February 2026 06:44:10 +0000 (0:00:01.129) 0:50:48.744 *******
2026-02-15 06:44:15.608546 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:15.608557 | orchestrator |
2026-02-15 06:44:15.608567 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-15 06:44:15.608578 | orchestrator | Sunday 15 February 2026 06:44:12 +0000 (0:00:01.431) 0:50:50.176 *******
2026-02-15 06:44:15.608589 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:15.608608 | orchestrator |
2026-02-15 06:44:15.608619 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-15 06:44:15.608629 | orchestrator | Sunday 15 February 2026 06:44:13 +0000 (0:00:01.191) 0:50:51.369 *******
2026-02-15 06:44:15.608640 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:15.608653 | orchestrator |
2026-02-15 06:44:15.608672 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-15 06:44:15.608808 | orchestrator | Sunday 15 February 2026 06:44:14 +0000 (0:00:01.146) 0:50:52.516 *******
2026-02-15 06:44:15.608856 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:15.608868 | orchestrator |
2026-02-15 06:44:15.608895 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-15 06:44:40.167744 | orchestrator | Sunday 15 February 2026 06:44:15 +0000 (0:00:01.181) 0:50:53.697 *******
2026-02-15 06:44:40.167940 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:40.167976 | orchestrator |
2026-02-15 06:44:40.167994 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-15 06:44:40.168006 | orchestrator | Sunday 15 February 2026 06:44:16 +0000 (0:00:01.150) 0:50:54.848 *******
2026-02-15 06:44:40.168017 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:40.168029 | orchestrator |
2026-02-15 06:44:40.168040 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-15 06:44:40.168051 | orchestrator | Sunday 15 February 2026 06:44:17 +0000 (0:00:01.160) 0:50:56.008 *******
2026-02-15 06:44:40.168062 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 06:44:40.168073 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:44:40.168085 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:44:40.168096 | orchestrator |
2026-02-15 06:44:40.168106 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-15 06:44:40.168117 | orchestrator | Sunday 15 February 2026 06:44:19 +0000 (0:00:01.724) 0:50:57.733 *******
2026-02-15 06:44:40.168128 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:40.168138 | orchestrator |
2026-02-15 06:44:40.168149 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-15 06:44:40.168160 | orchestrator | Sunday 15 February 2026 06:44:20 +0000 (0:00:01.335) 0:50:59.068 *******
2026-02-15 06:44:40.168170 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 06:44:40.168181 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:44:40.168192 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:44:40.168202 | orchestrator |
2026-02-15 06:44:40.168221 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-15 06:44:40.168240 | orchestrator | Sunday 15 February 2026 06:44:23 +0000 (0:00:02.915) 0:51:01.984 *******
2026-02-15 06:44:40.168258 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-02-15 06:44:40.168278 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-02-15 06:44:40.168297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-02-15 06:44:40.168318 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:40.168338 | orchestrator |
2026-02-15 06:44:40.168354 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-15 06:44:40.168367 | orchestrator | Sunday 15 February 2026 06:44:25 +0000 (0:00:01.532) 0:51:03.516 *******
2026-02-15 06:44:40.168382 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-15 06:44:40.168398 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-15 06:44:40.168438 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-15 06:44:40.168451 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:40.168463 | orchestrator |
2026-02-15 06:44:40.168475 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-15 06:44:40.168488 | orchestrator | Sunday 15 February 2026 06:44:27 +0000 (0:00:01.736) 0:51:05.253 *******
2026-02-15 06:44:40.168520 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:40.168537 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:40.168550 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:40.168585 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:40.168605 | orchestrator |
2026-02-15 06:44:40.168624 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-15 06:44:40.168643 | orchestrator | Sunday 15 February 2026 06:44:28 +0000 (0:00:01.228) 0:51:06.481 *******
2026-02-15 06:44:40.168663 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:44:21.505016', 'end': '2026-02-15 06:44:21.552148', 'delta': '0:00:00.047132', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-15 06:44:40.168687 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:44:22.119892', 'end': '2026-02-15 06:44:22.160635', 'delta': '0:00:00.040743', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-15 06:44:40.168707 | orchestrator | ok: [testbed-node-0] => (item={'changed': False, 'stdout': 'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:44:22.662371', 'end': '2026-02-15 06:44:22.700736', 'delta': '0:00:00.038365', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-15 06:44:40.168733 | orchestrator |
2026-02-15 06:44:40.168744 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-15 06:44:40.168755 | orchestrator | Sunday 15 February 2026 06:44:29 +0000 (0:00:01.224) 0:51:07.705 *******
2026-02-15 06:44:40.168766 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:40.168776 | orchestrator |
2026-02-15 06:44:40.168787 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-15 06:44:40.168798 | orchestrator | Sunday 15 February 2026 06:44:30 +0000 (0:00:01.307) 0:51:09.012 *******
2026-02-15 06:44:40.168850 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:40.168862 | orchestrator |
2026-02-15 06:44:40.168873 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-15 06:44:40.168884 | orchestrator | Sunday 15 February 2026 06:44:32 +0000 (0:00:01.244) 0:51:10.257 *******
2026-02-15 06:44:40.168894 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:40.168905 | orchestrator |
2026-02-15 06:44:40.168915 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-15 06:44:40.168926 | orchestrator | Sunday 15 February 2026 06:44:33 +0000 (0:00:01.182) 0:51:11.440 *******
2026-02-15 06:44:40.168936 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:40.168950 | orchestrator |
2026-02-15 06:44:40.168968 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 06:44:40.168986 | orchestrator | Sunday 15 February 2026 06:44:35 +0000 (0:00:02.052) 0:51:13.493 *******
2026-02-15 06:44:40.169004 | orchestrator | ok: [testbed-node-0]
2026-02-15 06:44:40.169022 | orchestrator |
2026-02-15 06:44:40.169041 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-15 06:44:40.169061 | orchestrator | Sunday 15 February 2026 06:44:36 +0000 (0:00:01.123) 0:51:14.616 *******
2026-02-15 06:44:40.169079 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:40.169095 | orchestrator |
2026-02-15 06:44:40.169106 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-15 06:44:40.169142 | orchestrator | Sunday 15 February 2026 06:44:37 +0000 (0:00:01.232) 0:51:15.849 *******
2026-02-15 06:44:40.169175 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:40.169200 | orchestrator |
2026-02-15 06:44:40.169212 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 06:44:40.169222 | orchestrator | Sunday 15 February 2026 06:44:39 +0000 (0:00:01.268) 0:51:17.117 *******
2026-02-15 06:44:40.169233 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:40.169244 | orchestrator |
2026-02-15 06:44:40.169264 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-15 06:44:49.972686 | orchestrator | Sunday 15 February 2026 06:44:40 +0000 (0:00:01.139) 0:51:18.257 *******
2026-02-15 06:44:49.972807 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:49.972835 | orchestrator |
2026-02-15 06:44:49.972924 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-15 06:44:49.972946 | orchestrator | Sunday 15 February 2026 06:44:41 +0000 (0:00:01.285) 0:51:19.543 *******
2026-02-15 06:44:49.972965 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:49.972984 | orchestrator |
2026-02-15 06:44:49.972996 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-15 06:44:49.973007 | orchestrator | Sunday 15 February 2026 06:44:42 +0000 (0:00:01.134) 0:51:20.677 *******
2026-02-15 06:44:49.973018 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:49.973029 | orchestrator |
2026-02-15 06:44:49.973040 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-15 06:44:49.973081 | orchestrator | Sunday 15 February 2026 06:44:43 +0000 (0:00:01.217) 0:51:21.895 *******
2026-02-15 06:44:49.973092 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:49.973103 | orchestrator |
2026-02-15 06:44:49.973114 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-15 06:44:49.973124 | orchestrator | Sunday 15 February 2026 06:44:44 +0000 (0:00:01.161) 0:51:23.058 *******
2026-02-15 06:44:49.973135 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:49.973145 | orchestrator |
2026-02-15 06:44:49.973156 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-15 06:44:49.973167 | orchestrator | Sunday 15 February 2026 06:44:46 +0000 (0:00:01.167) 0:51:24.225 *******
2026-02-15 06:44:49.973178 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:49.973188 | orchestrator |
2026-02-15 06:44:49.973199 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-15 06:44:49.973209 | orchestrator | Sunday 15 February 2026 06:44:47 +0000 (0:00:01.193) 0:51:25.419 *******
2026-02-15 06:44:49.973223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:44:49.973238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:44:49.973249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:44:49.973278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-15 06:44:49.973293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:44:49.973304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:44:49.973336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:44:49.973362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37951a5f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-15 06:44:49.973382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:44:49.973394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:44:49.973405 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:49.973416 | orchestrator |
2026-02-15 06:44:49.973427 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-15 06:44:49.973438 | orchestrator | Sunday 15 February 2026 06:44:48 +0000 (0:00:01.373) 0:51:26.793 *******
2026-02-15 06:44:49.973457 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:54.286808 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:54.286975 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:54.286991 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-38-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:54.287001 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:54.287024 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:54.287033 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:54.287080 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '37951a5f', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1', 'scsi-SQEMU_QEMU_HARDDISK_37951a5f-9a29-4d71-b98b-e7992be6d9db-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:54.287091 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:54.287104 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:44:54.287113 | orchestrator | skipping: [testbed-node-0]
2026-02-15 06:44:54.287123 | orchestrator |
2026-02-15 06:44:54.287132 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-15 06:44:54.287146 |
orchestrator | Sunday 15 February 2026 06:44:49 +0000 (0:00:01.281) 0:51:28.074 ******* 2026-02-15 06:44:54.287155 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:44:54.287163 | orchestrator | 2026-02-15 06:44:54.287171 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-15 06:44:54.287179 | orchestrator | Sunday 15 February 2026 06:44:51 +0000 (0:00:01.540) 0:51:29.615 ******* 2026-02-15 06:44:54.287187 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:44:54.287195 | orchestrator | 2026-02-15 06:44:54.287203 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:44:54.287211 | orchestrator | Sunday 15 February 2026 06:44:52 +0000 (0:00:01.168) 0:51:30.783 ******* 2026-02-15 06:44:54.287219 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:44:54.287226 | orchestrator | 2026-02-15 06:44:54.287234 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:44:54.287247 | orchestrator | Sunday 15 February 2026 06:44:54 +0000 (0:00:01.601) 0:51:32.385 ******* 2026-02-15 06:45:46.961875 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:45:46.961990 | orchestrator | 2026-02-15 06:45:46.962007 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:45:46.962107 | orchestrator | Sunday 15 February 2026 06:44:55 +0000 (0:00:01.194) 0:51:33.580 ******* 2026-02-15 06:45:46.962120 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:45:46.962132 | orchestrator | 2026-02-15 06:45:46.962143 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:45:46.962154 | orchestrator | Sunday 15 February 2026 06:44:56 +0000 (0:00:01.276) 0:51:34.857 ******* 2026-02-15 06:45:46.962166 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:45:46.962177 | orchestrator | 2026-02-15 06:45:46.962198 | 
orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:45:46.962210 | orchestrator | Sunday 15 February 2026 06:44:57 +0000 (0:00:01.169) 0:51:36.026 ******* 2026-02-15 06:45:46.962221 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 06:45:46.962233 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-02-15 06:45:46.962243 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-02-15 06:45:46.962254 | orchestrator | 2026-02-15 06:45:46.962265 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:45:46.962276 | orchestrator | Sunday 15 February 2026 06:44:59 +0000 (0:00:01.793) 0:51:37.820 ******* 2026-02-15 06:45:46.962287 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-02-15 06:45:46.962299 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-02-15 06:45:46.962309 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-02-15 06:45:46.962320 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:45:46.962331 | orchestrator | 2026-02-15 06:45:46.962342 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-15 06:45:46.962353 | orchestrator | Sunday 15 February 2026 06:45:00 +0000 (0:00:01.228) 0:51:39.049 ******* 2026-02-15 06:45:46.962364 | orchestrator | skipping: [testbed-node-0] 2026-02-15 06:45:46.962375 | orchestrator | 2026-02-15 06:45:46.962386 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-15 06:45:46.962397 | orchestrator | Sunday 15 February 2026 06:45:02 +0000 (0:00:01.146) 0:51:40.195 ******* 2026-02-15 06:45:46.962410 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 06:45:46.962422 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 
06:45:46.962435 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:45:46.962447 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:45:46.962460 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:45:46.962472 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:45:46.962509 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:45:46.962521 | orchestrator | 2026-02-15 06:45:46.962533 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-15 06:45:46.962546 | orchestrator | Sunday 15 February 2026 06:45:04 +0000 (0:00:02.260) 0:51:42.456 ******* 2026-02-15 06:45:46.962558 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-02-15 06:45:46.962571 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:45:46.962583 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:45:46.962596 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:45:46.962608 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-02-15 06:45:46.962634 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:45:46.962646 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 06:45:46.962659 | orchestrator | 2026-02-15 06:45:46.962671 | orchestrator | TASK [Set max_mds 1 on ceph fs] ************************************************ 2026-02-15 06:45:46.962683 | orchestrator | Sunday 15 February 2026 06:45:07 +0000 (0:00:02.822) 0:51:45.278 
******* 2026-02-15 06:45:46.962695 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:45:46.962708 | orchestrator | 2026-02-15 06:45:46.962720 | orchestrator | TASK [Wait until only rank 0 is up] ******************************************** 2026-02-15 06:45:46.962732 | orchestrator | Sunday 15 February 2026 06:45:10 +0000 (0:00:03.163) 0:51:48.442 ******* 2026-02-15 06:45:46.962744 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:45:46.962757 | orchestrator | 2026-02-15 06:45:46.962769 | orchestrator | TASK [Get name of remaining active mds] **************************************** 2026-02-15 06:45:46.962782 | orchestrator | Sunday 15 February 2026 06:45:13 +0000 (0:00:03.018) 0:51:51.461 ******* 2026-02-15 06:45:46.962793 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:45:46.962804 | orchestrator | 2026-02-15 06:45:46.962814 | orchestrator | TASK [Set_fact mds_active_name] ************************************************ 2026-02-15 06:45:46.962825 | orchestrator | Sunday 15 February 2026 06:45:15 +0000 (0:00:02.212) 0:51:53.674 ******* 2026-02-15 06:45:46.962859 | orchestrator | ok: [testbed-node-0] => (item={'key': 'gid_14738', 'value': {'gid': 14738, 'name': 'testbed-node-4', 'rank': 0, 'incarnation': 4, 'state': 'up:active', 'state_seq': 2, 'addr': '192.168.16.14:6817/1807655974', 'addrs': {'addrvec': [{'type': 'v2', 'addr': '192.168.16.14:6816', 'nonce': 1807655974}, {'type': 'v1', 'addr': '192.168.16.14:6817', 'nonce': 1807655974}]}, 'join_fscid': -1, 'export_targets': [], 'features': 4540138322906710015, 'flags': 0, 'compat': {'compat': {}, 'ro_compat': {}, 'incompat': {'feature_1': 'base v0.20', 'feature_2': 'client writeable ranges', 'feature_3': 'default file layouts on dirs', 'feature_4': 'dir inode in separate object', 'feature_5': 'mds uses versioned encoding', 'feature_6': 'dirfrag is stored in omap', 'feature_7': 'mds uses inline data', 'feature_8': 'no anchor table', 'feature_9': 'file layout v2', 'feature_10': 'snaprealm v2'}}}}) 2026-02-15 
06:45:46.962874 | orchestrator | 2026-02-15 06:45:46.962886 | orchestrator | TASK [Set_fact mds_active_host] ************************************************ 2026-02-15 06:45:46.962897 | orchestrator | Sunday 15 February 2026 06:45:16 +0000 (0:00:01.239) 0:51:54.913 ******* 2026-02-15 06:45:46.962907 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2026-02-15 06:45:46.962918 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-4) 2026-02-15 06:45:46.962929 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2026-02-15 06:45:46.962940 | orchestrator | 2026-02-15 06:45:46.962950 | orchestrator | TASK [Create standby_mdss group] *********************************************** 2026-02-15 06:45:46.962961 | orchestrator | Sunday 15 February 2026 06:45:18 +0000 (0:00:02.003) 0:51:56.916 ******* 2026-02-15 06:45:46.962971 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-5) 2026-02-15 06:45:46.962991 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-3) 2026-02-15 06:45:46.963002 | orchestrator | 2026-02-15 06:45:46.963012 | orchestrator | TASK [Stop standby ceph mds] *************************************************** 2026-02-15 06:45:46.963023 | orchestrator | Sunday 15 February 2026 06:45:20 +0000 (0:00:01.566) 0:51:58.483 ******* 2026-02-15 06:45:46.963033 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 06:45:46.963044 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:45:46.963055 | orchestrator | 2026-02-15 06:45:46.963065 | orchestrator | TASK [Mask systemd units for standby ceph mds] ********************************* 2026-02-15 06:45:46.963112 | orchestrator | Sunday 15 February 2026 06:45:28 +0000 (0:00:07.951) 0:52:06.435 ******* 2026-02-15 06:45:46.963125 | orchestrator | changed: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 
2026-02-15 06:45:46.963135 | orchestrator | changed: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 06:45:46.963146 | orchestrator | 2026-02-15 06:45:46.963156 | orchestrator | TASK [Wait until all standbys mds are stopped] ********************************* 2026-02-15 06:45:46.963167 | orchestrator | Sunday 15 February 2026 06:45:32 +0000 (0:00:03.784) 0:52:10.219 ******* 2026-02-15 06:45:46.963178 | orchestrator | ok: [testbed-node-0] 2026-02-15 06:45:46.963188 | orchestrator | 2026-02-15 06:45:46.963199 | orchestrator | TASK [Create active_mdss group] ************************************************ 2026-02-15 06:45:46.963210 | orchestrator | Sunday 15 February 2026 06:45:34 +0000 (0:00:02.123) 0:52:12.343 ******* 2026-02-15 06:45:46.963220 | orchestrator | changed: [testbed-node-0] 2026-02-15 06:45:46.963231 | orchestrator | 2026-02-15 06:45:46.963242 | orchestrator | PLAY [Upgrade active mds] ****************************************************** 2026-02-15 06:45:46.963253 | orchestrator | 2026-02-15 06:45:46.963264 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 06:45:46.963274 | orchestrator | Sunday 15 February 2026 06:45:35 +0000 (0:00:01.614) 0:52:13.958 ******* 2026-02-15 06:45:46.963285 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-15 06:45:46.963296 | orchestrator | 2026-02-15 06:45:46.963306 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 06:45:46.963317 | orchestrator | Sunday 15 February 2026 06:45:37 +0000 (0:00:01.163) 0:52:15.122 ******* 2026-02-15 06:45:46.963327 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:45:46.963338 | orchestrator | 2026-02-15 06:45:46.963354 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 06:45:46.963365 | orchestrator | Sunday 15 February 2026 
06:45:38 +0000 (0:00:01.450) 0:52:16.573 ******* 2026-02-15 06:45:46.963376 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:45:46.963386 | orchestrator | 2026-02-15 06:45:46.963397 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 06:45:46.963407 | orchestrator | Sunday 15 February 2026 06:45:39 +0000 (0:00:01.127) 0:52:17.700 ******* 2026-02-15 06:45:46.963418 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:45:46.963429 | orchestrator | 2026-02-15 06:45:46.963439 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 06:45:46.963450 | orchestrator | Sunday 15 February 2026 06:45:41 +0000 (0:00:01.470) 0:52:19.171 ******* 2026-02-15 06:45:46.963460 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:45:46.963471 | orchestrator | 2026-02-15 06:45:46.963482 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 06:45:46.963492 | orchestrator | Sunday 15 February 2026 06:45:42 +0000 (0:00:01.219) 0:52:20.390 ******* 2026-02-15 06:45:46.963503 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:45:46.963513 | orchestrator | 2026-02-15 06:45:46.963524 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 06:45:46.963535 | orchestrator | Sunday 15 February 2026 06:45:43 +0000 (0:00:01.171) 0:52:21.562 ******* 2026-02-15 06:45:46.963546 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:45:46.963563 | orchestrator | 2026-02-15 06:45:46.963574 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 06:45:46.963585 | orchestrator | Sunday 15 February 2026 06:45:44 +0000 (0:00:01.180) 0:52:22.743 ******* 2026-02-15 06:45:46.963596 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:45:46.963606 | orchestrator | 2026-02-15 06:45:46.963617 | orchestrator | TASK [ceph-facts : Set_fact 
ceph_release ceph_stable_release] ****************** 2026-02-15 06:45:46.963628 | orchestrator | Sunday 15 February 2026 06:45:45 +0000 (0:00:01.184) 0:52:23.927 ******* 2026-02-15 06:45:46.963638 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:45:46.963649 | orchestrator | 2026-02-15 06:45:46.963668 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 06:46:12.174718 | orchestrator | Sunday 15 February 2026 06:45:46 +0000 (0:00:01.125) 0:52:25.053 ******* 2026-02-15 06:46:12.174807 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:46:12.174817 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:46:12.174824 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:46:12.174830 | orchestrator | 2026-02-15 06:46:12.174837 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 06:46:12.174844 | orchestrator | Sunday 15 February 2026 06:45:48 +0000 (0:00:01.725) 0:52:26.778 ******* 2026-02-15 06:46:12.174850 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:46:12.174857 | orchestrator | 2026-02-15 06:46:12.174864 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 06:46:12.174870 | orchestrator | Sunday 15 February 2026 06:45:49 +0000 (0:00:01.310) 0:52:28.089 ******* 2026-02-15 06:46:12.174876 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:46:12.174882 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:46:12.174889 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:46:12.174895 | orchestrator | 2026-02-15 06:46:12.174901 | orchestrator | TASK 
[ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 06:46:12.174907 | orchestrator | Sunday 15 February 2026 06:45:52 +0000 (0:00:02.919) 0:52:31.008 ******* 2026-02-15 06:46:12.174914 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-15 06:46:12.174920 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-15 06:46:12.174926 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-15 06:46:12.174933 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:46:12.174939 | orchestrator | 2026-02-15 06:46:12.174945 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 06:46:12.174951 | orchestrator | Sunday 15 February 2026 06:45:54 +0000 (0:00:01.470) 0:52:32.479 ******* 2026-02-15 06:46:12.174959 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 06:46:12.174968 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 06:46:12.174975 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 06:46:12.174981 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:46:12.174987 | orchestrator | 2026-02-15 06:46:12.174993 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 06:46:12.175000 | orchestrator | Sunday 15 February 2026 06:45:56 +0000 
(0:00:02.119) 0:52:34.600 ******* 2026-02-15 06:46:12.175037 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:12.175047 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:12.175053 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:12.175059 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:46:12.175066 | orchestrator | 2026-02-15 06:46:12.175072 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 06:46:12.175078 | orchestrator | Sunday 15 February 2026 06:45:57 +0000 (0:00:01.196) 0:52:35.797 ******* 2026-02-15 06:46:12.175099 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 
06:45:50.572823', 'end': '2026-02-15 06:45:50.614792', 'delta': '0:00:00.041969', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 06:46:12.175107 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:45:51.141186', 'end': '2026-02-15 06:45:51.186183', 'delta': '0:00:00.044997', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 06:46:12.175128 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:45:51.700711', 'end': '2026-02-15 06:45:51.748695', 'delta': '0:00:00.047984', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': 
['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 06:46:12.175140 | orchestrator | 2026-02-15 06:46:12.175146 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 06:46:12.175153 | orchestrator | Sunday 15 February 2026 06:45:58 +0000 (0:00:01.210) 0:52:37.007 ******* 2026-02-15 06:46:12.175167 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:46:12.175220 | orchestrator | 2026-02-15 06:46:12.175227 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 06:46:12.175233 | orchestrator | Sunday 15 February 2026 06:46:00 +0000 (0:00:01.244) 0:52:38.252 ******* 2026-02-15 06:46:12.175240 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:46:12.175246 | orchestrator | 2026-02-15 06:46:12.175252 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-15 06:46:12.175258 | orchestrator | Sunday 15 February 2026 06:46:01 +0000 (0:00:01.680) 0:52:39.933 ******* 2026-02-15 06:46:12.175268 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:46:12.175275 | orchestrator | 2026-02-15 06:46:12.175281 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-15 06:46:12.175287 | orchestrator | Sunday 15 February 2026 06:46:03 +0000 (0:00:01.254) 0:52:41.187 ******* 2026-02-15 06:46:12.175293 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:46:12.175299 | orchestrator | 2026-02-15 06:46:12.175305 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:46:12.175311 | orchestrator | Sunday 15 February 2026 06:46:05 +0000 (0:00:01.986) 0:52:43.174 ******* 2026-02-15 06:46:12.175318 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:46:12.175324 | orchestrator | 2026-02-15 
06:46:12.175330 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-15 06:46:12.175336 | orchestrator | Sunday 15 February 2026 06:46:06 +0000 (0:00:01.174) 0:52:44.348 ******* 2026-02-15 06:46:12.175342 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:46:12.175348 | orchestrator | 2026-02-15 06:46:12.175354 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-15 06:46:12.175361 | orchestrator | Sunday 15 February 2026 06:46:07 +0000 (0:00:01.143) 0:52:45.491 ******* 2026-02-15 06:46:12.175367 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:46:12.175373 | orchestrator | 2026-02-15 06:46:12.175379 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:46:12.175385 | orchestrator | Sunday 15 February 2026 06:46:08 +0000 (0:00:01.297) 0:52:46.789 ******* 2026-02-15 06:46:12.175391 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:46:12.175397 | orchestrator | 2026-02-15 06:46:12.175403 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-15 06:46:12.175410 | orchestrator | Sunday 15 February 2026 06:46:09 +0000 (0:00:01.162) 0:52:47.952 ******* 2026-02-15 06:46:12.175416 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:46:12.175422 | orchestrator | 2026-02-15 06:46:12.175428 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-15 06:46:12.175434 | orchestrator | Sunday 15 February 2026 06:46:10 +0000 (0:00:01.138) 0:52:49.090 ******* 2026-02-15 06:46:12.175445 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:46:17.176133 | orchestrator | 2026-02-15 06:46:17.176295 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-15 06:46:17.176314 | orchestrator | Sunday 15 February 2026 06:46:12 +0000 (0:00:01.175) 
0:52:50.266 ******* 2026-02-15 06:46:17.176326 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:46:17.176339 | orchestrator | 2026-02-15 06:46:17.176350 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-15 06:46:17.176361 | orchestrator | Sunday 15 February 2026 06:46:13 +0000 (0:00:01.176) 0:52:51.442 ******* 2026-02-15 06:46:17.176372 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:46:17.176384 | orchestrator | 2026-02-15 06:46:17.176394 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-15 06:46:17.176405 | orchestrator | Sunday 15 February 2026 06:46:14 +0000 (0:00:01.212) 0:52:52.655 ******* 2026-02-15 06:46:17.176415 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:46:17.176452 | orchestrator | 2026-02-15 06:46:17.176464 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-15 06:46:17.176475 | orchestrator | Sunday 15 February 2026 06:46:15 +0000 (0:00:01.150) 0:52:53.805 ******* 2026-02-15 06:46:17.176486 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:46:17.176497 | orchestrator | 2026-02-15 06:46:17.176508 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-15 06:46:17.176519 | orchestrator | Sunday 15 February 2026 06:46:16 +0000 (0:00:01.166) 0:52:54.971 ******* 2026-02-15 06:46:17.176532 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:46:17.176548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 
'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee', 'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'uuids': ['65aea23d-0c6f-484a-a24c-521c476a1576'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k']}})  2026-02-15 06:46:17.176577 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7cc59cd1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-15 06:46:17.176591 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800']}})  2026-02-15 06:46:17.176604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:46:17.176634 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:46:17.176654 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-31-00'], 'labels': 
['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 06:46:17.176669 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:46:17.176683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24', 'dm-uuid-CRYPT-LUKS2-d6fb5e45582d485d831faba7ab4bd3c7-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 06:46:17.176696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:46:17.176714 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'uuids': ['d6fb5e45-582d-485d-831f-aba7ab4bd3c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24']}})  2026-02-15 06:46:17.176728 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee']}})  2026-02-15 06:46:17.176749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:46:18.990000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7713f0f4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'uuids': ['5C78-612A'], 
'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 06:46:18.990177 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:46:18.990256 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:46:18.990273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k', 'dm-uuid-CRYPT-LUKS2-65aea23d0c6f484aa24c521c476a1576-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k'], 'uuids': [], 'labels': [], 'masters': 
[]}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-15 06:46:18.990287 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:18.990300 | orchestrator |
2026-02-15 06:46:18.990312 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-15 06:46:18.990324 | orchestrator | Sunday 15 February 2026 06:46:18 +0000 (0:00:01.481) 0:52:56.453 *******
2026-02-15 06:46:18.990379 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:46:18.990395 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee', 'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'uuids': ['65aea23d-0c6f-484a-a24c-521c476a1576'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard':
'4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:18.990437 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7cc59cd1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:18.990456 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:18.990469 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:18.990497 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:20.225635 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:20.225737 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:20.225754 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24', 'dm-uuid-CRYPT-LUKS2-d6fb5e45582d485d831faba7ab4bd3c7-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:20.225782 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:20.225795 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'uuids': ['d6fb5e45-582d-485d-831f-aba7ab4bd3c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:20.225854 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:20.225870 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:20.225889 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7713f0f4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:20.225910 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:20.225930 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:46:56.073178 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k', 'dm-uuid-CRYPT-LUKS2-65aea23d0c6f484aa24c521c476a1576-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:46:56.073312 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.073410 | orchestrator |
2026-02-15 06:46:56.073425 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-15 06:46:56.073437 | orchestrator | Sunday 15 February 2026 06:46:20 +0000 (0:00:01.866) 0:52:58.319 *******
2026-02-15 06:46:56.073448 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:46:56.073460 | orchestrator |
2026-02-15 06:46:56.073472 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-15 06:46:56.073483 | orchestrator | Sunday 15 February 2026 06:46:21 +0000 (0:00:01.540) 0:52:59.860 *******
2026-02-15 06:46:56.073494 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:46:56.073504 | orchestrator |
2026-02-15 06:46:56.073515 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 06:46:56.073527 | orchestrator | Sunday 15 February 2026 06:46:22 +0000 (0:00:01.186) 0:53:01.047 *******
2026-02-15 06:46:56.073538 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:46:56.073548 | orchestrator |
2026-02-15 06:46:56.073559 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 06:46:56.073571 | orchestrator | Sunday 15 February 2026 06:46:24 +0000 (0:00:01.472) 0:53:02.520 *******
2026-02-15 06:46:56.073581 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.073592 | orchestrator |
2026-02-15 06:46:56.073603 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 06:46:56.073614 | orchestrator | Sunday 15 February 2026 06:46:25 +0000 (0:00:01.189) 0:53:03.709 *******
2026-02-15 06:46:56.073625 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.073636 | orchestrator |
2026-02-15 06:46:56.073647 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 06:46:56.073684 | orchestrator | Sunday 15 February 2026 06:46:26 +0000 (0:00:01.289) 0:53:04.998 *******
2026-02-15 06:46:56.073695 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.073706 | orchestrator |
2026-02-15 06:46:56.073717 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-15 06:46:56.073728 | orchestrator | Sunday 15 February 2026 06:46:28 +0000 (0:00:01.213) 0:53:06.212 *******
2026-02-15 06:46:56.073738 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-15 06:46:56.073749 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-15 06:46:56.073760 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-15 06:46:56.073771 | orchestrator |
2026-02-15 06:46:56.073782 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-15 06:46:56.073793 | orchestrator | Sunday 15 February 2026 06:46:29 +0000 (0:00:01.729) 0:53:07.942 *******
2026-02-15 06:46:56.073803 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-15 06:46:56.073814 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-15 06:46:56.073825 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-15 06:46:56.073836 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.073846 | orchestrator |
2026-02-15 06:46:56.073857 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-15 06:46:56.073868 | orchestrator | Sunday 15 February 2026 06:46:31 +0000 (0:00:01.313) 0:53:09.255 *******
2026-02-15 06:46:56.073878 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-02-15 06:46:56.073890 | orchestrator |
2026-02-15 06:46:56.073901 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:46:56.073913 | orchestrator | Sunday 15 February 2026 06:46:32 +0000 (0:00:01.139) 0:53:10.395 *******
2026-02-15 06:46:56.073924 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.073935 | orchestrator |
2026-02-15 06:46:56.073946 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:46:56.073956 | orchestrator | Sunday 15 February 2026 06:46:33 +0000 (0:00:01.203) 0:53:11.599 *******
2026-02-15 06:46:56.073967 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.073978 | orchestrator |
2026-02-15 06:46:56.073988 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:46:56.073999 | orchestrator | Sunday 15 February 2026 06:46:34 +0000 (0:00:01.215) 0:53:12.815 *******
2026-02-15 06:46:56.074010 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.074085 | orchestrator |
2026-02-15 06:46:56.074096 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:46:56.074107 | orchestrator | Sunday 15 February 2026 06:46:36 +0000 (0:00:01.395) 0:53:14.210 *******
2026-02-15 06:46:56.074117 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:46:56.074128 | orchestrator |
2026-02-15 06:46:56.074139 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:46:56.074149 | orchestrator | Sunday 15 February 2026 06:46:37 +0000 (0:00:01.270) 0:53:15.481 *******
2026-02-15 06:46:56.074173 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-15 06:46:56.074214 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-15 06:46:56.074234 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-15 06:46:56.074246 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.074256 | orchestrator |
2026-02-15 06:46:56.074267 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:46:56.074278 | orchestrator | Sunday 15 February 2026 06:46:38 +0000 (0:00:01.479) 0:53:16.960 *******
2026-02-15 06:46:56.074289 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-15 06:46:56.074300 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-15 06:46:56.074311 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-15 06:46:56.074359 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.074372 | orchestrator |
2026-02-15 06:46:56.074383 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:46:56.074394 | orchestrator | Sunday 15 February 2026 06:46:40 +0000 (0:00:01.486) 0:53:18.447 *******
2026-02-15 06:46:56.074405 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-15 06:46:56.074416 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-15 06:46:56.074427 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-02-15 06:46:56.074437 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.074448 | orchestrator |
2026-02-15 06:46:56.074459 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:46:56.074469 | orchestrator | Sunday 15 February 2026 06:46:41 +0000 (0:00:01.466) 0:53:19.914 *******
2026-02-15 06:46:56.074480 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:46:56.074491 | orchestrator |
2026-02-15 06:46:56.074501 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:46:56.074512 | orchestrator | Sunday 15 February 2026 06:46:42 +0000 (0:00:01.150) 0:53:21.064 *******
2026-02-15 06:46:56.074522 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-02-15 06:46:56.074533 | orchestrator |
2026-02-15 06:46:56.074543 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-15 06:46:56.074602 | orchestrator | Sunday 15 February 2026 06:46:44 +0000 (0:00:01.444) 0:53:22.509 *******
2026-02-15 06:46:56.074614 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:46:56.074625 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:46:56.074636 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:46:56.074652 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 06:46:56.074663 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-15 06:46:56.074673 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-15 06:46:56.074684 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 06:46:56.074695 | orchestrator |
2026-02-15 06:46:56.074705 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-15 06:46:56.074716 | orchestrator | Sunday 15 February 2026 06:46:46 +0000 (0:00:02.411) 0:53:24.921 *******
2026-02-15 06:46:56.074726 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:46:56.074737 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:46:56.074747 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:46:56.074758 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 06:46:56.074768 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4)
2026-02-15 06:46:56.074779 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-02-15 06:46:56.074790 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 06:46:56.074800 | orchestrator |
2026-02-15 06:46:56.074811 | orchestrator | TASK [Prevent restart from the packaging] **************************************
2026-02-15 06:46:56.074821 | orchestrator | Sunday 15 February 2026 06:46:49 +0000 (0:00:02.851) 0:53:27.772 *******
2026-02-15 06:46:56.074832 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.074843 | orchestrator |
2026-02-15 06:46:56.074853 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 06:46:56.074864 | orchestrator | Sunday 15 February 2026 06:46:50 +0000 (0:00:01.166) 0:53:28.939 *******
2026-02-15 06:46:56.074875 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4
2026-02-15 06:46:56.074893 | orchestrator |
2026-02-15 06:46:56.074904 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 06:46:56.074915 | orchestrator | Sunday 15 February 2026 06:46:51 +0000 (0:00:01.125) 0:53:30.065 *******
2026-02-15 06:46:56.074925 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4
2026-02-15 06:46:56.074936 | orchestrator |
2026-02-15 06:46:56.074947 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 06:46:56.074957 | orchestrator | Sunday 15 February 2026 06:46:53 +0000 (0:00:01.366) 0:53:31.431 *******
2026-02-15 06:46:56.074968 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:46:56.074978 | orchestrator |
2026-02-15 06:46:56.074989 | orchestrator
| TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-15 06:46:56.074999 | orchestrator | Sunday 15 February 2026 06:46:54 +0000 (0:00:01.196) 0:53:32.628 ******* 2026-02-15 06:46:56.075010 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:46:56.075021 | orchestrator | 2026-02-15 06:46:56.075031 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-02-15 06:46:56.075051 | orchestrator | Sunday 15 February 2026 06:46:56 +0000 (0:00:01.533) 0:53:34.161 ******* 2026-02-15 06:47:47.322370 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.322560 | orchestrator | 2026-02-15 06:47:47.322583 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-15 06:47:47.322597 | orchestrator | Sunday 15 February 2026 06:46:57 +0000 (0:00:01.589) 0:53:35.751 ******* 2026-02-15 06:47:47.322609 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.322620 | orchestrator | 2026-02-15 06:47:47.322631 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-15 06:47:47.322642 | orchestrator | Sunday 15 February 2026 06:46:59 +0000 (0:00:01.682) 0:53:37.434 ******* 2026-02-15 06:47:47.322653 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.322666 | orchestrator | 2026-02-15 06:47:47.322676 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-15 06:47:47.322687 | orchestrator | Sunday 15 February 2026 06:47:00 +0000 (0:00:01.210) 0:53:38.644 ******* 2026-02-15 06:47:47.322698 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.322709 | orchestrator | 2026-02-15 06:47:47.322720 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-15 06:47:47.322731 | orchestrator | Sunday 15 February 2026 06:47:01 +0000 (0:00:01.138) 0:53:39.783 ******* 2026-02-15 06:47:47.322741 | 
orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.322752 | orchestrator | 2026-02-15 06:47:47.322763 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-15 06:47:47.322774 | orchestrator | Sunday 15 February 2026 06:47:02 +0000 (0:00:01.213) 0:53:40.997 ******* 2026-02-15 06:47:47.322784 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.322795 | orchestrator | 2026-02-15 06:47:47.322806 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-15 06:47:47.322817 | orchestrator | Sunday 15 February 2026 06:47:04 +0000 (0:00:01.591) 0:53:42.588 ******* 2026-02-15 06:47:47.322827 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.322838 | orchestrator | 2026-02-15 06:47:47.322849 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-15 06:47:47.322859 | orchestrator | Sunday 15 February 2026 06:47:06 +0000 (0:00:01.557) 0:53:44.146 ******* 2026-02-15 06:47:47.322870 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.322882 | orchestrator | 2026-02-15 06:47:47.322893 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-15 06:47:47.322904 | orchestrator | Sunday 15 February 2026 06:47:07 +0000 (0:00:01.201) 0:53:45.347 ******* 2026-02-15 06:47:47.322915 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.322926 | orchestrator | 2026-02-15 06:47:47.322937 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-15 06:47:47.322964 | orchestrator | Sunday 15 February 2026 06:47:08 +0000 (0:00:01.158) 0:53:46.505 ******* 2026-02-15 06:47:47.322999 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.323011 | orchestrator | 2026-02-15 06:47:47.323022 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-15 
06:47:47.323033 | orchestrator | Sunday 15 February 2026 06:47:09 +0000 (0:00:01.307) 0:53:47.813 ******* 2026-02-15 06:47:47.323043 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.323054 | orchestrator | 2026-02-15 06:47:47.323065 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-15 06:47:47.323076 | orchestrator | Sunday 15 February 2026 06:47:10 +0000 (0:00:01.178) 0:53:48.991 ******* 2026-02-15 06:47:47.323087 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.323097 | orchestrator | 2026-02-15 06:47:47.323108 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-15 06:47:47.323119 | orchestrator | Sunday 15 February 2026 06:47:12 +0000 (0:00:01.163) 0:53:50.154 ******* 2026-02-15 06:47:47.323130 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323141 | orchestrator | 2026-02-15 06:47:47.323151 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-15 06:47:47.323162 | orchestrator | Sunday 15 February 2026 06:47:13 +0000 (0:00:01.117) 0:53:51.272 ******* 2026-02-15 06:47:47.323173 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323183 | orchestrator | 2026-02-15 06:47:47.323194 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-15 06:47:47.323205 | orchestrator | Sunday 15 February 2026 06:47:14 +0000 (0:00:01.168) 0:53:52.441 ******* 2026-02-15 06:47:47.323215 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323226 | orchestrator | 2026-02-15 06:47:47.323237 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-15 06:47:47.323247 | orchestrator | Sunday 15 February 2026 06:47:15 +0000 (0:00:01.129) 0:53:53.570 ******* 2026-02-15 06:47:47.323258 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.323269 | orchestrator | 2026-02-15 
06:47:47.323279 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-15 06:47:47.323290 | orchestrator | Sunday 15 February 2026 06:47:16 +0000 (0:00:01.147) 0:53:54.718 ******* 2026-02-15 06:47:47.323300 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.323311 | orchestrator | 2026-02-15 06:47:47.323322 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-15 06:47:47.323332 | orchestrator | Sunday 15 February 2026 06:47:17 +0000 (0:00:01.199) 0:53:55.918 ******* 2026-02-15 06:47:47.323343 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323354 | orchestrator | 2026-02-15 06:47:47.323365 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-15 06:47:47.323375 | orchestrator | Sunday 15 February 2026 06:47:18 +0000 (0:00:01.107) 0:53:57.025 ******* 2026-02-15 06:47:47.323386 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323397 | orchestrator | 2026-02-15 06:47:47.323407 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-15 06:47:47.323418 | orchestrator | Sunday 15 February 2026 06:47:20 +0000 (0:00:01.119) 0:53:58.145 ******* 2026-02-15 06:47:47.323434 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323452 | orchestrator | 2026-02-15 06:47:47.323471 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-15 06:47:47.323489 | orchestrator | Sunday 15 February 2026 06:47:21 +0000 (0:00:01.140) 0:53:59.285 ******* 2026-02-15 06:47:47.323532 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323550 | orchestrator | 2026-02-15 06:47:47.323570 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-15 06:47:47.323610 | orchestrator | Sunday 15 February 2026 06:47:22 +0000 (0:00:01.132) 
0:54:00.418 ******* 2026-02-15 06:47:47.323623 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323634 | orchestrator | 2026-02-15 06:47:47.323644 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-15 06:47:47.323655 | orchestrator | Sunday 15 February 2026 06:47:23 +0000 (0:00:01.224) 0:54:01.642 ******* 2026-02-15 06:47:47.323676 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323687 | orchestrator | 2026-02-15 06:47:47.323698 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-15 06:47:47.323708 | orchestrator | Sunday 15 February 2026 06:47:24 +0000 (0:00:01.202) 0:54:02.845 ******* 2026-02-15 06:47:47.323719 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323730 | orchestrator | 2026-02-15 06:47:47.323741 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-15 06:47:47.323752 | orchestrator | Sunday 15 February 2026 06:47:25 +0000 (0:00:01.172) 0:54:04.018 ******* 2026-02-15 06:47:47.323762 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323773 | orchestrator | 2026-02-15 06:47:47.323784 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-15 06:47:47.323795 | orchestrator | Sunday 15 February 2026 06:47:27 +0000 (0:00:01.179) 0:54:05.198 ******* 2026-02-15 06:47:47.323805 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323816 | orchestrator | 2026-02-15 06:47:47.323826 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-15 06:47:47.323837 | orchestrator | Sunday 15 February 2026 06:47:28 +0000 (0:00:01.147) 0:54:06.346 ******* 2026-02-15 06:47:47.323848 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323858 | orchestrator | 2026-02-15 06:47:47.323869 | orchestrator | TASK [ceph-common : 
Include configure_memory_allocator.yml] ******************** 2026-02-15 06:47:47.323879 | orchestrator | Sunday 15 February 2026 06:47:29 +0000 (0:00:01.121) 0:54:07.467 ******* 2026-02-15 06:47:47.323890 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323901 | orchestrator | 2026-02-15 06:47:47.323911 | orchestrator | TASK [ceph-common : Include selinux.yml] *************************************** 2026-02-15 06:47:47.323922 | orchestrator | Sunday 15 February 2026 06:47:30 +0000 (0:00:01.167) 0:54:08.635 ******* 2026-02-15 06:47:47.323933 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.323944 | orchestrator | 2026-02-15 06:47:47.323955 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-15 06:47:47.323965 | orchestrator | Sunday 15 February 2026 06:47:31 +0000 (0:00:01.156) 0:54:09.792 ******* 2026-02-15 06:47:47.323976 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.323987 | orchestrator | 2026-02-15 06:47:47.324004 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-15 06:47:47.324015 | orchestrator | Sunday 15 February 2026 06:47:33 +0000 (0:00:01.933) 0:54:11.726 ******* 2026-02-15 06:47:47.324026 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.324037 | orchestrator | 2026-02-15 06:47:47.324047 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-15 06:47:47.324058 | orchestrator | Sunday 15 February 2026 06:47:35 +0000 (0:00:02.246) 0:54:13.972 ******* 2026-02-15 06:47:47.324069 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-15 06:47:47.324080 | orchestrator | 2026-02-15 06:47:47.324091 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-15 06:47:47.324102 | orchestrator | Sunday 15 February 2026 06:47:37 +0000 (0:00:01.144) 
0:54:15.117 ******* 2026-02-15 06:47:47.324112 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.324123 | orchestrator | 2026-02-15 06:47:47.324133 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-15 06:47:47.324144 | orchestrator | Sunday 15 February 2026 06:47:38 +0000 (0:00:01.140) 0:54:16.258 ******* 2026-02-15 06:47:47.324155 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.324166 | orchestrator | 2026-02-15 06:47:47.324176 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-15 06:47:47.324187 | orchestrator | Sunday 15 February 2026 06:47:39 +0000 (0:00:01.181) 0:54:17.440 ******* 2026-02-15 06:47:47.324197 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-15 06:47:47.324208 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-15 06:47:47.324226 | orchestrator | 2026-02-15 06:47:47.324237 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-15 06:47:47.324248 | orchestrator | Sunday 15 February 2026 06:47:41 +0000 (0:00:01.863) 0:54:19.303 ******* 2026-02-15 06:47:47.324258 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:47:47.324269 | orchestrator | 2026-02-15 06:47:47.324279 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-15 06:47:47.324290 | orchestrator | Sunday 15 February 2026 06:47:42 +0000 (0:00:01.480) 0:54:20.784 ******* 2026-02-15 06:47:47.324300 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.324311 | orchestrator | 2026-02-15 06:47:47.324322 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-15 06:47:47.324332 | orchestrator | Sunday 15 February 2026 06:47:43 +0000 (0:00:01.191) 0:54:21.976 ******* 2026-02-15 06:47:47.324343 | 
orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.324354 | orchestrator | 2026-02-15 06:47:47.324364 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-15 06:47:47.324375 | orchestrator | Sunday 15 February 2026 06:47:45 +0000 (0:00:01.154) 0:54:23.130 ******* 2026-02-15 06:47:47.324386 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:47:47.324396 | orchestrator | 2026-02-15 06:47:47.324407 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-15 06:47:47.324417 | orchestrator | Sunday 15 February 2026 06:47:46 +0000 (0:00:01.142) 0:54:24.272 ******* 2026-02-15 06:47:47.324428 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-02-15 06:47:47.324438 | orchestrator | 2026-02-15 06:47:47.324449 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-15 06:47:47.324467 | orchestrator | Sunday 15 February 2026 06:47:47 +0000 (0:00:01.142) 0:54:25.414 ******* 2026-02-15 06:48:34.564192 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:48:34.564311 | orchestrator | 2026-02-15 06:48:34.564328 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-15 06:48:34.564342 | orchestrator | Sunday 15 February 2026 06:47:49 +0000 (0:00:01.718) 0:54:27.133 ******* 2026-02-15 06:48:34.564354 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-15 06:48:34.564365 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-15 06:48:34.564376 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-15 06:48:34.564387 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.564399 | orchestrator | 2026-02-15 06:48:34.564410 | orchestrator | TASK [ceph-container-common 
: Pulling node-exporter container image] *********** 2026-02-15 06:48:34.564421 | orchestrator | Sunday 15 February 2026 06:47:50 +0000 (0:00:01.162) 0:54:28.296 ******* 2026-02-15 06:48:34.564432 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.564443 | orchestrator | 2026-02-15 06:48:34.564453 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-15 06:48:34.564464 | orchestrator | Sunday 15 February 2026 06:47:51 +0000 (0:00:01.154) 0:54:29.451 ******* 2026-02-15 06:48:34.564475 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.564485 | orchestrator | 2026-02-15 06:48:34.564496 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-15 06:48:34.564507 | orchestrator | Sunday 15 February 2026 06:47:52 +0000 (0:00:01.220) 0:54:30.672 ******* 2026-02-15 06:48:34.564517 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.564528 | orchestrator | 2026-02-15 06:48:34.564539 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-15 06:48:34.564549 | orchestrator | Sunday 15 February 2026 06:47:53 +0000 (0:00:01.211) 0:54:31.883 ******* 2026-02-15 06:48:34.564560 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.564571 | orchestrator | 2026-02-15 06:48:34.564581 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-15 06:48:34.564592 | orchestrator | Sunday 15 February 2026 06:47:55 +0000 (0:00:01.241) 0:54:33.124 ******* 2026-02-15 06:48:34.564627 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.564714 | orchestrator | 2026-02-15 06:48:34.564728 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-15 06:48:34.564741 | orchestrator | Sunday 15 February 2026 06:47:56 +0000 (0:00:01.209) 0:54:34.334 ******* 2026-02-15 06:48:34.564754 | orchestrator | 
ok: [testbed-node-4] 2026-02-15 06:48:34.564766 | orchestrator | 2026-02-15 06:48:34.564793 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-15 06:48:34.564807 | orchestrator | Sunday 15 February 2026 06:47:58 +0000 (0:00:02.501) 0:54:36.836 ******* 2026-02-15 06:48:34.564819 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:48:34.564833 | orchestrator | 2026-02-15 06:48:34.564845 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-15 06:48:34.564856 | orchestrator | Sunday 15 February 2026 06:47:59 +0000 (0:00:01.153) 0:54:37.989 ******* 2026-02-15 06:48:34.564867 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-02-15 06:48:34.564878 | orchestrator | 2026-02-15 06:48:34.564888 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-15 06:48:34.564899 | orchestrator | Sunday 15 February 2026 06:48:01 +0000 (0:00:01.148) 0:54:39.137 ******* 2026-02-15 06:48:34.564910 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.564921 | orchestrator | 2026-02-15 06:48:34.564931 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-15 06:48:34.564942 | orchestrator | Sunday 15 February 2026 06:48:02 +0000 (0:00:01.159) 0:54:40.297 ******* 2026-02-15 06:48:34.564952 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.564963 | orchestrator | 2026-02-15 06:48:34.564974 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-15 06:48:34.564985 | orchestrator | Sunday 15 February 2026 06:48:03 +0000 (0:00:01.189) 0:54:41.486 ******* 2026-02-15 06:48:34.564996 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565006 | orchestrator | 2026-02-15 06:48:34.565017 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
mimic] ********************* 2026-02-15 06:48:34.565028 | orchestrator | Sunday 15 February 2026 06:48:04 +0000 (0:00:01.175) 0:54:42.661 ******* 2026-02-15 06:48:34.565038 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565049 | orchestrator | 2026-02-15 06:48:34.565060 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-15 06:48:34.565070 | orchestrator | Sunday 15 February 2026 06:48:05 +0000 (0:00:01.196) 0:54:43.858 ******* 2026-02-15 06:48:34.565095 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565106 | orchestrator | 2026-02-15 06:48:34.565127 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-15 06:48:34.565138 | orchestrator | Sunday 15 February 2026 06:48:06 +0000 (0:00:01.134) 0:54:44.993 ******* 2026-02-15 06:48:34.565148 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565159 | orchestrator | 2026-02-15 06:48:34.565170 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-15 06:48:34.565180 | orchestrator | Sunday 15 February 2026 06:48:08 +0000 (0:00:01.178) 0:54:46.172 ******* 2026-02-15 06:48:34.565191 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565202 | orchestrator | 2026-02-15 06:48:34.565212 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-15 06:48:34.565223 | orchestrator | Sunday 15 February 2026 06:48:09 +0000 (0:00:01.148) 0:54:47.321 ******* 2026-02-15 06:48:34.565233 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565244 | orchestrator | 2026-02-15 06:48:34.565255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-15 06:48:34.565265 | orchestrator | Sunday 15 February 2026 06:48:10 +0000 (0:00:01.254) 0:54:48.576 ******* 2026-02-15 06:48:34.565276 | orchestrator | ok: [testbed-node-4] 
2026-02-15 06:48:34.565286 | orchestrator | 2026-02-15 06:48:34.565297 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-15 06:48:34.565335 | orchestrator | Sunday 15 February 2026 06:48:11 +0000 (0:00:01.147) 0:54:49.723 ******* 2026-02-15 06:48:34.565347 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-15 06:48:34.565359 | orchestrator | 2026-02-15 06:48:34.565369 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-15 06:48:34.565380 | orchestrator | Sunday 15 February 2026 06:48:12 +0000 (0:00:01.130) 0:54:50.853 ******* 2026-02-15 06:48:34.565391 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-15 06:48:34.565402 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-15 06:48:34.565413 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-15 06:48:34.565424 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-15 06:48:34.565435 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-15 06:48:34.565446 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-15 06:48:34.565456 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-15 06:48:34.565467 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-15 06:48:34.565478 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-15 06:48:34.565488 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-15 06:48:34.565499 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-15 06:48:34.565510 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-15 06:48:34.565521 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-15 06:48:34.565531 | 
orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-15 06:48:34.565542 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-15 06:48:34.565552 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-15 06:48:34.565563 | orchestrator | 2026-02-15 06:48:34.565574 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-15 06:48:34.565584 | orchestrator | Sunday 15 February 2026 06:48:19 +0000 (0:00:06.612) 0:54:57.466 ******* 2026-02-15 06:48:34.565595 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-15 06:48:34.565606 | orchestrator | 2026-02-15 06:48:34.565616 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-15 06:48:34.565632 | orchestrator | Sunday 15 February 2026 06:48:20 +0000 (0:00:01.101) 0:54:58.567 ******* 2026-02-15 06:48:34.565661 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 06:48:34.565675 | orchestrator | 2026-02-15 06:48:34.565686 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-15 06:48:34.565696 | orchestrator | Sunday 15 February 2026 06:48:22 +0000 (0:00:01.577) 0:55:00.145 ******* 2026-02-15 06:48:34.565707 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 06:48:34.565718 | orchestrator | 2026-02-15 06:48:34.565728 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-15 06:48:34.565739 | orchestrator | Sunday 15 February 2026 06:48:24 +0000 (0:00:02.046) 0:55:02.191 ******* 2026-02-15 06:48:34.565749 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565760 | orchestrator | 
2026-02-15 06:48:34.565770 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-15 06:48:34.565781 | orchestrator | Sunday 15 February 2026 06:48:25 +0000 (0:00:01.136) 0:55:03.328 ******* 2026-02-15 06:48:34.565792 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565802 | orchestrator | 2026-02-15 06:48:34.565813 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-15 06:48:34.565824 | orchestrator | Sunday 15 February 2026 06:48:26 +0000 (0:00:01.169) 0:55:04.497 ******* 2026-02-15 06:48:34.565842 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565852 | orchestrator | 2026-02-15 06:48:34.565863 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-15 06:48:34.565874 | orchestrator | Sunday 15 February 2026 06:48:27 +0000 (0:00:01.160) 0:55:05.658 ******* 2026-02-15 06:48:34.565885 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565896 | orchestrator | 2026-02-15 06:48:34.565906 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-15 06:48:34.565917 | orchestrator | Sunday 15 February 2026 06:48:28 +0000 (0:00:01.198) 0:55:06.856 ******* 2026-02-15 06:48:34.565928 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565938 | orchestrator | 2026-02-15 06:48:34.565949 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-15 06:48:34.565960 | orchestrator | Sunday 15 February 2026 06:48:29 +0000 (0:00:01.154) 0:55:08.011 ******* 2026-02-15 06:48:34.565971 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.565981 | orchestrator | 2026-02-15 06:48:34.565992 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-15 06:48:34.566003 | 
orchestrator | Sunday 15 February 2026 06:48:31 +0000 (0:00:01.155) 0:55:09.166 ******* 2026-02-15 06:48:34.566013 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.566079 | orchestrator | 2026-02-15 06:48:34.566090 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-02-15 06:48:34.566101 | orchestrator | Sunday 15 February 2026 06:48:32 +0000 (0:00:01.122) 0:55:10.289 ******* 2026-02-15 06:48:34.566111 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.566122 | orchestrator | 2026-02-15 06:48:34.566133 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-15 06:48:34.566144 | orchestrator | Sunday 15 February 2026 06:48:33 +0000 (0:00:01.148) 0:55:11.437 ******* 2026-02-15 06:48:34.566155 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:48:34.566165 | orchestrator | 2026-02-15 06:48:34.566183 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-15 06:49:30.312667 | orchestrator | Sunday 15 February 2026 06:48:34 +0000 (0:00:01.214) 0:55:12.651 ******* 2026-02-15 06:49:30.312789 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.312861 | orchestrator | 2026-02-15 06:49:30.312874 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-15 06:49:30.312885 | orchestrator | Sunday 15 February 2026 06:48:35 +0000 (0:00:01.176) 0:55:13.828 ******* 2026-02-15 06:49:30.312897 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.312908 | orchestrator | 2026-02-15 06:49:30.312919 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-15 06:49:30.312930 | orchestrator | Sunday 15 February 2026 06:48:36 +0000 (0:00:01.146) 0:55:14.974 ******* 2026-02-15 06:49:30.312941 | orchestrator | changed: [testbed-node-4 -> 
testbed-node-2(192.168.16.12)] 2026-02-15 06:49:30.312952 | orchestrator | 2026-02-15 06:49:30.312963 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-15 06:49:30.312974 | orchestrator | Sunday 15 February 2026 06:48:41 +0000 (0:00:04.416) 0:55:19.391 ******* 2026-02-15 06:49:30.312986 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 06:49:30.312999 | orchestrator | 2026-02-15 06:49:30.313010 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-15 06:49:30.313021 | orchestrator | Sunday 15 February 2026 06:48:42 +0000 (0:00:01.185) 0:55:20.576 ******* 2026-02-15 06:49:30.313034 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-15 06:49:30.313089 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-15 06:49:30.313104 | orchestrator | 2026-02-15 06:49:30.313115 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-15 06:49:30.313126 | orchestrator | Sunday 15 February 2026 06:48:47 +0000 (0:00:04.790) 0:55:25.367 ******* 2026-02-15 06:49:30.313136 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.313148 | orchestrator | 2026-02-15 06:49:30.313159 | orchestrator | TASK [ceph-config : Create ceph 
conf directory] ******************************** 2026-02-15 06:49:30.313169 | orchestrator | Sunday 15 February 2026 06:48:48 +0000 (0:00:01.166) 0:55:26.533 ******* 2026-02-15 06:49:30.313180 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.313191 | orchestrator | 2026-02-15 06:49:30.313202 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-15 06:49:30.313213 | orchestrator | Sunday 15 February 2026 06:48:49 +0000 (0:00:01.136) 0:55:27.670 ******* 2026-02-15 06:49:30.313224 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.313234 | orchestrator | 2026-02-15 06:49:30.313245 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-15 06:49:30.313256 | orchestrator | Sunday 15 February 2026 06:48:50 +0000 (0:00:01.197) 0:55:28.867 ******* 2026-02-15 06:49:30.313266 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.313277 | orchestrator | 2026-02-15 06:49:30.313288 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-15 06:49:30.313299 | orchestrator | Sunday 15 February 2026 06:48:51 +0000 (0:00:01.219) 0:55:30.087 ******* 2026-02-15 06:49:30.313310 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.313320 | orchestrator | 2026-02-15 06:49:30.313331 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-15 06:49:30.313342 | orchestrator | Sunday 15 February 2026 06:48:53 +0000 (0:00:01.190) 0:55:31.277 ******* 2026-02-15 06:49:30.313353 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:49:30.313365 | orchestrator | 2026-02-15 06:49:30.313376 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-15 06:49:30.313387 | orchestrator | Sunday 15 February 2026 06:48:54 +0000 (0:00:01.302) 0:55:32.580 
******* 2026-02-15 06:49:30.313397 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 06:49:30.313409 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 06:49:30.313420 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 06:49:30.313431 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.313441 | orchestrator | 2026-02-15 06:49:30.313452 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 06:49:30.313463 | orchestrator | Sunday 15 February 2026 06:48:55 +0000 (0:00:01.514) 0:55:34.094 ******* 2026-02-15 06:49:30.313474 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 06:49:30.313485 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 06:49:30.313496 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 06:49:30.313507 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.313517 | orchestrator | 2026-02-15 06:49:30.313528 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 06:49:30.313539 | orchestrator | Sunday 15 February 2026 06:48:57 +0000 (0:00:01.497) 0:55:35.592 ******* 2026-02-15 06:49:30.313550 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 06:49:30.313561 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 06:49:30.313572 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 06:49:30.313597 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.313617 | orchestrator | 2026-02-15 06:49:30.313628 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 06:49:30.313639 | orchestrator | Sunday 15 February 2026 06:48:58 +0000 (0:00:01.425) 0:55:37.017 ******* 2026-02-15 06:49:30.313650 | orchestrator | ok: 
[testbed-node-4] 2026-02-15 06:49:30.313660 | orchestrator | 2026-02-15 06:49:30.313671 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 06:49:30.313681 | orchestrator | Sunday 15 February 2026 06:49:00 +0000 (0:00:01.143) 0:55:38.160 ******* 2026-02-15 06:49:30.313692 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-15 06:49:30.313703 | orchestrator | 2026-02-15 06:49:30.313713 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-15 06:49:30.313724 | orchestrator | Sunday 15 February 2026 06:49:01 +0000 (0:00:01.365) 0:55:39.526 ******* 2026-02-15 06:49:30.313735 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:49:30.313745 | orchestrator | 2026-02-15 06:49:30.313756 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-02-15 06:49:30.313767 | orchestrator | Sunday 15 February 2026 06:49:03 +0000 (0:00:01.803) 0:55:41.329 ******* 2026-02-15 06:49:30.313777 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.313788 | orchestrator | 2026-02-15 06:49:30.313819 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-02-15 06:49:30.313830 | orchestrator | Sunday 15 February 2026 06:49:04 +0000 (0:00:01.129) 0:55:42.458 ******* 2026-02-15 06:49:30.313841 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-4 2026-02-15 06:49:30.313851 | orchestrator | 2026-02-15 06:49:30.313862 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-02-15 06:49:30.313873 | orchestrator | Sunday 15 February 2026 06:49:06 +0000 (0:00:01.675) 0:55:44.134 ******* 2026-02-15 06:49:30.313883 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-02-15 06:49:30.313894 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 
2026-02-15 06:49:30.313905 | orchestrator | 2026-02-15 06:49:30.313915 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-02-15 06:49:30.313926 | orchestrator | Sunday 15 February 2026 06:49:07 +0000 (0:00:01.855) 0:55:45.990 ******* 2026-02-15 06:49:30.313936 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 06:49:30.313953 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-15 06:49:30.313964 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 06:49:30.313975 | orchestrator | 2026-02-15 06:49:30.313985 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-02-15 06:49:30.313996 | orchestrator | Sunday 15 February 2026 06:49:11 +0000 (0:00:03.267) 0:55:49.257 ******* 2026-02-15 06:49:30.314007 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-15 06:49:30.314061 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-15 06:49:30.314074 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:49:30.314085 | orchestrator | 2026-02-15 06:49:30.314095 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-02-15 06:49:30.314106 | orchestrator | Sunday 15 February 2026 06:49:13 +0000 (0:00:01.996) 0:55:51.254 ******* 2026-02-15 06:49:30.314116 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:49:30.314127 | orchestrator | 2026-02-15 06:49:30.314138 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-02-15 06:49:30.314149 | orchestrator | Sunday 15 February 2026 06:49:14 +0000 (0:00:01.472) 0:55:52.726 ******* 2026-02-15 06:49:30.314159 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:49:30.314170 | orchestrator | 2026-02-15 06:49:30.314181 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-02-15 
06:49:30.314192 | orchestrator | Sunday 15 February 2026 06:49:15 +0000 (0:00:01.195) 0:55:53.922 ******* 2026-02-15 06:49:30.314202 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-4 2026-02-15 06:49:30.314222 | orchestrator | 2026-02-15 06:49:30.314233 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-02-15 06:49:30.314243 | orchestrator | Sunday 15 February 2026 06:49:17 +0000 (0:00:01.443) 0:55:55.365 ******* 2026-02-15 06:49:30.314254 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-4 2026-02-15 06:49:30.314264 | orchestrator | 2026-02-15 06:49:30.314275 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-02-15 06:49:30.314286 | orchestrator | Sunday 15 February 2026 06:49:18 +0000 (0:00:01.508) 0:55:56.874 ******* 2026-02-15 06:49:30.314296 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:49:30.314307 | orchestrator | 2026-02-15 06:49:30.314317 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-02-15 06:49:30.314328 | orchestrator | Sunday 15 February 2026 06:49:20 +0000 (0:00:02.059) 0:55:58.933 ******* 2026-02-15 06:49:30.314338 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:49:30.314349 | orchestrator | 2026-02-15 06:49:30.314360 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-02-15 06:49:30.314370 | orchestrator | Sunday 15 February 2026 06:49:22 +0000 (0:00:01.963) 0:56:00.897 ******* 2026-02-15 06:49:30.314381 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:49:30.314391 | orchestrator | 2026-02-15 06:49:30.314402 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-02-15 06:49:30.314412 | orchestrator | Sunday 15 February 2026 06:49:25 +0000 (0:00:02.279) 0:56:03.176 ******* 2026-02-15 06:49:30.314423 | 
orchestrator | ok: [testbed-node-4] 2026-02-15 06:49:30.314434 | orchestrator | 2026-02-15 06:49:30.314444 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-02-15 06:49:30.314455 | orchestrator | Sunday 15 February 2026 06:49:27 +0000 (0:00:02.432) 0:56:05.609 ******* 2026-02-15 06:49:30.314466 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:49:30.314476 | orchestrator | 2026-02-15 06:49:30.314487 | orchestrator | TASK [Restart ceph mds] ******************************************************** 2026-02-15 06:49:30.314497 | orchestrator | Sunday 15 February 2026 06:49:29 +0000 (0:00:01.666) 0:56:07.275 ******* 2026-02-15 06:49:30.314517 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:50:05.335040 | orchestrator | 2026-02-15 06:50:05.335160 | orchestrator | TASK [Restart active mds] ****************************************************** 2026-02-15 06:50:05.335177 | orchestrator | Sunday 15 February 2026 06:49:30 +0000 (0:00:01.130) 0:56:08.406 ******* 2026-02-15 06:50:05.335189 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:50:05.335201 | orchestrator | 2026-02-15 06:50:05.335213 | orchestrator | PLAY [Upgrade standbys ceph mdss cluster] ************************************** 2026-02-15 06:50:05.335224 | orchestrator | 2026-02-15 06:50:05.335324 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 06:50:05.335338 | orchestrator | Sunday 15 February 2026 06:49:41 +0000 (0:00:10.806) 0:56:19.212 ******* 2026-02-15 06:50:05.335349 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5, testbed-node-3 2026-02-15 06:50:05.335361 | orchestrator | 2026-02-15 06:50:05.335372 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 06:50:05.335384 | orchestrator | Sunday 15 February 2026 06:49:42 +0000 (0:00:01.266) 0:56:20.479 ******* 2026-02-15 06:50:05.335395 | 
orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:05.335406 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:05.335417 | orchestrator | 2026-02-15 06:50:05.335428 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 06:50:05.335439 | orchestrator | Sunday 15 February 2026 06:49:44 +0000 (0:00:01.652) 0:56:22.132 ******* 2026-02-15 06:50:05.335450 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:05.335461 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:05.335472 | orchestrator | 2026-02-15 06:50:05.335483 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 06:50:05.335494 | orchestrator | Sunday 15 February 2026 06:49:45 +0000 (0:00:01.643) 0:56:23.775 ******* 2026-02-15 06:50:05.335505 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:05.335542 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:05.335557 | orchestrator | 2026-02-15 06:50:05.335570 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 06:50:05.335583 | orchestrator | Sunday 15 February 2026 06:49:47 +0000 (0:00:01.585) 0:56:25.361 ******* 2026-02-15 06:50:05.335596 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:05.335608 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:05.335621 | orchestrator | 2026-02-15 06:50:05.335634 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 06:50:05.335646 | orchestrator | Sunday 15 February 2026 06:49:48 +0000 (0:00:01.242) 0:56:26.604 ******* 2026-02-15 06:50:05.335674 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:05.335687 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:05.335699 | orchestrator | 2026-02-15 06:50:05.335713 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 06:50:05.335725 | orchestrator | Sunday 15 February 
2026 06:49:49 +0000 (0:00:01.209) 0:56:27.813 ******* 2026-02-15 06:50:05.335738 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:05.335750 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:05.335763 | orchestrator | 2026-02-15 06:50:05.335776 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 06:50:05.335790 | orchestrator | Sunday 15 February 2026 06:49:50 +0000 (0:00:01.263) 0:56:29.076 ******* 2026-02-15 06:50:05.335803 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:05.335817 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:50:05.335830 | orchestrator | 2026-02-15 06:50:05.335843 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 06:50:05.335856 | orchestrator | Sunday 15 February 2026 06:49:52 +0000 (0:00:01.294) 0:56:30.371 ******* 2026-02-15 06:50:05.335868 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:05.335914 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:05.335927 | orchestrator | 2026-02-15 06:50:05.335938 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 06:50:05.335949 | orchestrator | Sunday 15 February 2026 06:49:53 +0000 (0:00:01.372) 0:56:31.744 ******* 2026-02-15 06:50:05.335959 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:50:05.335970 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:50:05.335981 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:50:05.335991 | orchestrator | 2026-02-15 06:50:05.336002 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-02-15 06:50:05.336013 | orchestrator | Sunday 15 February 2026 06:49:55 +0000 (0:00:01.817) 0:56:33.561 ******* 2026-02-15 06:50:05.336023 
| orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:05.336034 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:05.336044 | orchestrator | 2026-02-15 06:50:05.336055 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 06:50:05.336066 | orchestrator | Sunday 15 February 2026 06:49:56 +0000 (0:00:01.411) 0:56:34.973 ******* 2026-02-15 06:50:05.336076 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:50:05.336087 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:50:05.336097 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:50:05.336108 | orchestrator | 2026-02-15 06:50:05.336118 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 06:50:05.336129 | orchestrator | Sunday 15 February 2026 06:49:59 +0000 (0:00:02.874) 0:56:37.847 ******* 2026-02-15 06:50:05.336140 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-15 06:50:05.336151 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-15 06:50:05.336162 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-15 06:50:05.336173 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:05.336192 | orchestrator | 2026-02-15 06:50:05.336203 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 06:50:05.336214 | orchestrator | Sunday 15 February 2026 06:50:01 +0000 (0:00:01.456) 0:56:39.303 ******* 2026-02-15 06:50:05.336246 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 06:50:05.336261 | 
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 06:50:05.336272 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 06:50:05.336283 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:05.336294 | orchestrator | 2026-02-15 06:50:05.336306 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 06:50:05.336317 | orchestrator | Sunday 15 February 2026 06:50:02 +0000 (0:00:01.691) 0:56:40.995 ******* 2026-02-15 06:50:05.336330 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:05.336350 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:05.336362 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:05.336373 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:05.336384 | orchestrator | 2026-02-15 06:50:05.336394 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 06:50:05.336405 | orchestrator | Sunday 15 February 2026 06:50:04 +0000 (0:00:01.216) 0:56:42.212 ******* 2026-02-15 06:50:05.336418 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:49:57.430910', 'end': '2026-02-15 06:49:57.476601', 'delta': '0:00:00.045691', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 06:50:05.336432 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:49:58.000325', 'end': '2026-02-15 06:49:58.036734', 'delta': '0:00:00.036409', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': 
None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 06:50:05.336459 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:49:58.548076', 'end': '2026-02-15 06:49:58.598244', 'delta': '0:00:00.050168', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 06:50:24.862467 | orchestrator | 2026-02-15 06:50:24.862555 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 06:50:24.862565 | orchestrator | Sunday 15 February 2026 06:50:05 +0000 (0:00:01.210) 0:56:43.422 ******* 2026-02-15 06:50:24.862571 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:24.862578 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:24.862583 | orchestrator | 2026-02-15 06:50:24.862589 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 06:50:24.862595 | orchestrator | Sunday 15 February 2026 06:50:06 +0000 (0:00:01.431) 0:56:44.854 ******* 2026-02-15 06:50:24.862601 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:24.862607 | orchestrator | 2026-02-15 06:50:24.862613 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-02-15 06:50:24.862618 | orchestrator | Sunday 15 
February 2026 06:50:08 +0000 (0:00:01.292) 0:56:46.147 ******* 2026-02-15 06:50:24.862624 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:24.862629 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:24.862635 | orchestrator | 2026-02-15 06:50:24.862640 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-02-15 06:50:24.862646 | orchestrator | Sunday 15 February 2026 06:50:09 +0000 (0:00:01.285) 0:56:47.432 ******* 2026-02-15 06:50:24.862651 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:50:24.862657 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-02-15 06:50:24.862662 | orchestrator | 2026-02-15 06:50:24.862668 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 06:50:24.862673 | orchestrator | Sunday 15 February 2026 06:50:11 +0000 (0:00:02.552) 0:56:49.985 ******* 2026-02-15 06:50:24.862689 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:24.862695 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:24.862700 | orchestrator | 2026-02-15 06:50:24.862706 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-02-15 06:50:24.862711 | orchestrator | Sunday 15 February 2026 06:50:13 +0000 (0:00:01.360) 0:56:51.346 ******* 2026-02-15 06:50:24.862717 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:24.862722 | orchestrator | 2026-02-15 06:50:24.862727 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-02-15 06:50:24.862733 | orchestrator | Sunday 15 February 2026 06:50:14 +0000 (0:00:01.143) 0:56:52.489 ******* 2026-02-15 06:50:24.862738 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:24.862753 | orchestrator | 2026-02-15 06:50:24.862759 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-02-15 
06:50:24.862764 | orchestrator | Sunday 15 February 2026 06:50:15 +0000 (0:00:01.258) 0:56:53.748 ******* 2026-02-15 06:50:24.862786 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:24.862792 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:50:24.862797 | orchestrator | 2026-02-15 06:50:24.862803 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-02-15 06:50:24.862808 | orchestrator | Sunday 15 February 2026 06:50:16 +0000 (0:00:01.238) 0:56:54.987 ******* 2026-02-15 06:50:24.862813 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:24.862819 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:50:24.862824 | orchestrator | 2026-02-15 06:50:24.862830 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-02-15 06:50:24.862835 | orchestrator | Sunday 15 February 2026 06:50:18 +0000 (0:00:01.218) 0:56:56.206 ******* 2026-02-15 06:50:24.862840 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:24.862845 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:24.862851 | orchestrator | 2026-02-15 06:50:24.862856 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-02-15 06:50:24.862861 | orchestrator | Sunday 15 February 2026 06:50:19 +0000 (0:00:01.242) 0:56:57.448 ******* 2026-02-15 06:50:24.862867 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:24.862872 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:50:24.862877 | orchestrator | 2026-02-15 06:50:24.862883 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-02-15 06:50:24.862888 | orchestrator | Sunday 15 February 2026 06:50:20 +0000 (0:00:01.337) 0:56:58.785 ******* 2026-02-15 06:50:24.862893 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:24.862898 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:24.862904 | orchestrator | 2026-02-15 
06:50:24.862909 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-02-15 06:50:24.862914 | orchestrator | Sunday 15 February 2026 06:50:22 +0000 (0:00:01.324) 0:57:00.110 ******* 2026-02-15 06:50:24.862920 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:24.862925 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:50:24.862975 | orchestrator | 2026-02-15 06:50:24.862981 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-02-15 06:50:24.862987 | orchestrator | Sunday 15 February 2026 06:50:23 +0000 (0:00:01.314) 0:57:01.424 ******* 2026-02-15 06:50:24.862993 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:50:24.862998 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:50:24.863003 | orchestrator | 2026-02-15 06:50:24.863009 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-15 06:50:24.863014 | orchestrator | Sunday 15 February 2026 06:50:24 +0000 (0:00:01.295) 0:57:02.720 ******* 2026-02-15 06:50:24.863021 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:24.863043 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'uuids': ['dcdf938a-1e00-4f8c-ba32-16bd01cbd7b7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': 
None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z']}})  2026-02-15 06:50:24.863053 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ca6afbc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 06:50:24.863070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9']}})  2026-02-15 06:50:24.863079 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:24.863086 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:24.863093 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 06:50:24.863100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:24.863112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP', 'dm-uuid-CRYPT-LUKS2-ddc473233b6d4a8581ea0c389df91130-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 06:50:24.941058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:24.941153 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:24.941175 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 'uuids': 
['ddc47323-3b6d-4a85-81ea-0c389df91130'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP']}})  2026-02-15 06:50:24.941187 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978']}})  2026-02-15 06:50:24.941196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'uuids': ['d94e5f79-6313-45be-bfeb-6c020052505d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d']}})  2026-02-15 06:50:24.941204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:24.941226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30e735a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-15 06:50:24.941241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769']}})  2026-02-15 06:50:24.941257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3b30427', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 
'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-15 06:50:24.941267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:24.941276 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:24.941290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:26.117033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:26.117168 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': 
{'ids': ['dm-name-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z', 'dm-uuid-CRYPT-LUKS2-dcdf938a1e004f8cba3216bd01cbd7b7-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 06:50:26.117189 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 06:50:26.117200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:26.117212 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:50:26.117224 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV', 'dm-uuid-CRYPT-LUKS2-00e62f5af87144e797787951ba7c7c75-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 
'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 06:50:26.117236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:26.117248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'uuids': ['00e62f5a-f871-44e7-9778-7951ba7c7c75'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV']}})  2026-02-15 06:50:26.117304 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55']}})  2026-02-15 06:50:26.117323 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:26.117339 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cdab0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 06:50:26.117353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:26.117372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:50:26.117391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': 
['dm-name-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d', 'dm-uuid-CRYPT-LUKS2-d94e5f79631345bebfeb6c020052505d-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 06:50:26.373593 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:50:26.373689 | orchestrator | 2026-02-15 06:50:26.373703 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 06:50:26.373714 | orchestrator | Sunday 15 February 2026 06:50:26 +0000 (0:00:01.492) 0:57:04.213 ******* 2026-02-15 06:50:26.373742 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:26.373757 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'uuids': 
['dcdf938a-1e00-4f8c-ba32-16bd01cbd7b7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:26.373768 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ca6afbc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:26.373780 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:26.373838 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:26.373855 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': 
{'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:26.373866 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:26.373876 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:26.373886 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP', 'dm-uuid-CRYPT-LUKS2-ddc473233b6d4a8581ea0c389df91130-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:26.373903 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:50:26.373920 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 'uuids': ['ddc47323-3b6d-4a85-81ea-0c389df91130'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:26.456433 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:26.456563 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:26.456591 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'uuids': ['d94e5f79-6313-45be-bfeb-6c020052505d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:26.456638 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:26.456661 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30e735a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:26.456721 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3b30427', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:26.456747 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:26.456781 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:26.456820 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:27.645625 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:27.645729 |
orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z', 'dm-uuid-CRYPT-LUKS2-dcdf938a1e004f8cba3216bd01cbd7b7-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:27.645745 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:27.645782 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:50:27.645797 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:27.645809 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:27.645860 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV', 'dm-uuid-CRYPT-LUKS2-00e62f5af87144e797787951ba7c7c75-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:27.645882 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:27.645904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'uuids': ['00e62f5a-f871-44e7-9778-7951ba7c7c75'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:27.645925 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:27.646165 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:27.646235 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cdab0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:56.072202 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:56.072329 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:56.072344 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d', 'dm-uuid-CRYPT-LUKS2-d94e5f79631345bebfeb6c020052505d-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1',
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:50:56.072352 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:50:56.072361 | orchestrator |
2026-02-15 06:50:56.072369 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-15 06:50:56.072377 | orchestrator | Sunday 15 February 2026 06:50:27 +0000 (0:00:01.524) 0:57:05.738 *******
2026-02-15 06:50:56.072384 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:50:56.072392 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:50:56.072399 | orchestrator |
2026-02-15 06:50:56.072406 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-15 06:50:56.072412 | orchestrator | Sunday 15 February 2026 06:50:29 +0000 (0:00:01.829) 0:57:07.568 *******
2026-02-15 06:50:56.072419 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:50:56.072426 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:50:56.072433 | orchestrator |
2026-02-15 06:50:56.072440 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 06:50:56.072460 | orchestrator | Sunday 15 February 2026 06:50:30 +0000 (0:00:01.288) 0:57:08.857 *******
2026-02-15 06:50:56.072467 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:50:56.072474 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:50:56.072481 | orchestrator |
2026-02-15 06:50:56.072488 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 06:50:56.072495 | orchestrator | Sunday 15 February 2026 06:50:32 +0000 (0:00:01.630) 0:57:10.487 *******
2026-02-15 06:50:56.072502 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:50:56.072509 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:50:56.072516 | orchestrator |
2026-02-15 06:50:56.072523 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 06:50:56.072530 | orchestrator | Sunday 15 February 2026 06:50:33 +0000 (0:00:01.350) 0:57:11.838 *******
2026-02-15 06:50:56.072537 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:50:56.072544 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:50:56.072551 | orchestrator |
2026-02-15 06:50:56.072558 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 06:50:56.072565 | orchestrator | Sunday 15 February 2026 06:50:35 +0000 (0:00:01.421) 0:57:13.260 *******
2026-02-15 06:50:56.072579 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:50:56.072586 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:50:56.072593 | orchestrator |
2026-02-15 06:50:56.072600 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-15 06:50:56.072607 | orchestrator | Sunday 15 February 2026 06:50:36 +0000 (0:00:01.272) 0:57:14.532 *******
2026-02-15 06:50:56.072614 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-15 06:50:56.072622 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-02-15 06:50:56.072628 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-15 06:50:56.072635 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-02-15 06:50:56.072642 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-15 06:50:56.072649 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-02-15 06:50:56.072656 | orchestrator |
2026-02-15 06:50:56.072663 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-15 06:50:56.072670 | orchestrator | Sunday 15 February 2026 06:50:38 +0000 (0:00:02.196) 0:57:16.729 *******
2026-02-15 06:50:56.072691 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-15 06:50:56.072698 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-15 06:50:56.072705 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-15 06:50:56.072712 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:50:56.072719 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-15 06:50:56.072726 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-15 06:50:56.072733 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-15 06:50:56.072741 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:50:56.072748 | orchestrator |
2026-02-15 06:50:56.072756 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-15 06:50:56.072764 | orchestrator | Sunday 15 February 2026 06:50:39 +0000 (0:00:01.307) 0:57:18.036 *******
2026-02-15 06:50:56.072772 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5, testbed-node-3
2026-02-15 06:50:56.072780 | orchestrator |
2026-02-15 06:50:56.072788 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:50:56.072796 | orchestrator | Sunday 15 February 2026 06:50:41 +0000 (0:00:01.251) 0:57:19.288 *******
2026-02-15 06:50:56.072804 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:50:56.072812 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:50:56.072819 | orchestrator |
2026-02-15 06:50:56.072827 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:50:56.072834 | orchestrator | Sunday 15 February 2026 06:50:42 +0000 (0:00:01.257) 0:57:20.545 *******
2026-02-15 06:50:56.072841 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:50:56.072849 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:50:56.072857 | orchestrator |
2026-02-15 06:50:56.072865 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:50:56.072873 | orchestrator | Sunday 15 February 2026 06:50:43 +0000 (0:00:01.433) 0:57:21.979 *******
2026-02-15 06:50:56.072880 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:50:56.072888 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:50:56.072895 | orchestrator |
2026-02-15 06:50:56.072902 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:50:56.072910 | orchestrator | Sunday 15 February 2026 06:50:45 +0000 (0:00:01.272) 0:57:23.252 *******
2026-02-15 06:50:56.072918 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:50:56.072925 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:50:56.072932 | orchestrator |
2026-02-15 06:50:56.072939 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:50:56.072946 | orchestrator | Sunday 15 February 2026 06:50:46 +0000 (0:00:01.348) 0:57:24.601 *******
2026-02-15 06:50:56.072958 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-15 06:50:56.072965 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-15 06:50:56.072972 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 06:50:56.072979 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:50:56.072986 | orchestrator |
2026-02-15 06:50:56.072993 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:50:56.073022 | orchestrator | Sunday 15 February 2026 06:50:48 +0000 (0:00:01.838) 0:57:26.439 *******
2026-02-15 06:50:56.073030 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-15 06:50:56.073037 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-15 06:50:56.073044 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 06:50:56.073051 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:50:56.073058 | orchestrator |
2026-02-15 06:50:56.073065 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:50:56.073076 | orchestrator | Sunday 15 February 2026 06:50:49 +0000 (0:00:01.399) 0:57:27.838 *******
2026-02-15 06:50:56.073083 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-15 06:50:56.073090 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-15 06:50:56.073097 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 06:50:56.073103 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:50:56.073110 | orchestrator |
2026-02-15 06:50:56.073117 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:50:56.073124 | orchestrator | Sunday 15 February 2026 06:50:51 +0000 (0:00:01.395) 0:57:29.234 *******
2026-02-15 06:50:56.073131 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:50:56.073138 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:50:56.073145 | orchestrator |
2026-02-15 06:50:56.073152 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:50:56.073159 | orchestrator | Sunday 15 February 2026 06:50:52 +0000 (0:00:01.228) 0:57:30.462 *******
2026-02-15 06:50:56.073166 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-15 06:50:56.073173 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-15 06:50:56.073180 | orchestrator |
2026-02-15 06:50:56.073187 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-15 06:50:56.073194 | orchestrator | Sunday 15 February 2026 06:50:53 +0000 (0:00:01.463) 0:57:31.926 *******
2026-02-15 06:50:56.073201 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:50:56.073208 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:50:56.073215 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:50:56.073222 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 06:50:56.073229 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-15 06:50:56.073236 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 06:50:56.073248 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 06:51:41.265305 | orchestrator |
2026-02-15 06:51:41.265414 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-15 06:51:41.265431 | orchestrator | Sunday 15 February 2026 06:50:56 +0000 (0:00:02.232) 0:57:34.158 *******
2026-02-15 06:51:41.265442 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:51:41.265453 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:51:41.265463 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:51:41.265473 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 06:51:41.265506 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-15 06:51:41.265517 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 06:51:41.265528 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 06:51:41.265537 | orchestrator |
2026-02-15 06:51:41.265548 | orchestrator | TASK [Prevent restarts from the packaging] *************************************
2026-02-15 06:51:41.265557 | orchestrator | Sunday 15 February 2026 06:50:58 +0000 (0:00:02.686) 0:57:36.845 *******
2026-02-15 06:51:41.265567 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:51:41.265578 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:51:41.265588 | orchestrator |
2026-02-15 06:51:41.265598 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 06:51:41.265608 | orchestrator | Sunday 15 February 2026 06:51:00 +0000 (0:00:01.323) 0:57:38.168 *******
2026-02-15 06:51:41.265617 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5, testbed-node-3
2026-02-15 06:51:41.265627 | orchestrator |
2026-02-15 06:51:41.265637 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 06:51:41.265647 | orchestrator | Sunday 15 February 2026 06:51:01 +0000 (0:00:01.589) 0:57:39.758 *******
2026-02-15 06:51:41.265656 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5, testbed-node-3
2026-02-15 06:51:41.265666 | orchestrator |
2026-02-15 06:51:41.265676 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 06:51:41.265685 | orchestrator | Sunday 15 February 2026 06:51:02 +0000 (0:00:01.268) 0:57:41.026 *******
2026-02-15 06:51:41.265695 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:51:41.265705 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:51:41.265715 | orchestrator |
2026-02-15 06:51:41.265724 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 06:51:41.265734 | orchestrator | Sunday 15 February 2026 06:51:04 +0000 (0:00:01.261) 0:57:42.287 *******
2026-02-15 06:51:41.265743 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:51:41.265754 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:51:41.265764 | orchestrator |
2026-02-15 06:51:41.265774 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 06:51:41.265783 | orchestrator | Sunday 15 February 2026 06:51:05 +0000 (0:00:01.694) 0:57:43.982 *******
2026-02-15 06:51:41.265793 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:51:41.265803 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:51:41.265812 | orchestrator |
2026-02-15 06:51:41.265822 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 06:51:41.265831 | orchestrator | Sunday 15 February 2026 06:51:07 +0000 (0:00:01.648) 0:57:45.631 *******
2026-02-15 06:51:41.265841 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:51:41.265852 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:51:41.265863 | orchestrator |
2026-02-15 06:51:41.265889 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 06:51:41.265900 | orchestrator | Sunday 15 February 2026 06:51:09 +0000 (0:00:01.635) 0:57:47.266 *******
2026-02-15 06:51:41.265911 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:51:41.265922 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:51:41.265933 | orchestrator |
2026-02-15 06:51:41.265944 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 06:51:41.265955 | orchestrator | Sunday 15 February 2026 06:51:10 +0000 (0:00:01.256) 0:57:48.523 *******
2026-02-15 06:51:41.265967 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:51:41.265978 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:51:41.265989 | orchestrator |
2026-02-15 06:51:41.265999 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 06:51:41.266010 | orchestrator | Sunday 15 February 2026 06:51:11 +0000 (0:00:01.252) 0:57:49.775 *******
2026-02-15 06:51:41.266077 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:51:41.266096 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:51:41.266129 | orchestrator |
2026-02-15 06:51:41.266141 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 06:51:41.266152 | orchestrator | Sunday 15 February 2026 06:51:12 +0000 (0:00:01.268) 0:57:51.044 *******
2026-02-15 06:51:41.266163 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:51:41.266175 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:51:41.266186 | orchestrator |
2026-02-15 06:51:41.266199 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 06:51:41.266209 | orchestrator | Sunday 15 February 2026 06:51:14 +0000 (0:00:01.681) 0:57:52.726 *******
2026-02-15 06:51:41.266219 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:51:41.266229 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:51:41.266238 | orchestrator |
2026-02-15 06:51:41.266248 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 06:51:41.266258 | orchestrator | Sunday 15 February 2026 06:51:16 +0000 (0:00:01.737) 0:57:54.464 *******
2026-02-15 06:51:41.266267 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:51:41.266277 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:51:41.266286 | orchestrator |
2026-02-15 06:51:41.266296 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 06:51:41.266306 | orchestrator | Sunday 15 February 2026 06:51:17 +0000 (0:00:01.286) 0:57:55.750 *******
2026-02-15 06:51:41.266316 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:51:41.266342 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:51:41.266352 | orchestrator |
2026-02-15 06:51:41.266362 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 06:51:41.266371 | orchestrator | Sunday 15
February 2026 06:51:18 +0000 (0:00:01.249) 0:57:56.999 ******* 2026-02-15 06:51:41.266381 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:51:41.266391 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:51:41.266400 | orchestrator | 2026-02-15 06:51:41.266410 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-15 06:51:41.266420 | orchestrator | Sunday 15 February 2026 06:51:20 +0000 (0:00:01.272) 0:57:58.272 ******* 2026-02-15 06:51:41.266429 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:51:41.266439 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:51:41.266449 | orchestrator | 2026-02-15 06:51:41.266458 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-15 06:51:41.266468 | orchestrator | Sunday 15 February 2026 06:51:21 +0000 (0:00:01.332) 0:57:59.604 ******* 2026-02-15 06:51:41.266478 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:51:41.266487 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:51:41.266497 | orchestrator | 2026-02-15 06:51:41.266506 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-15 06:51:41.266516 | orchestrator | Sunday 15 February 2026 06:51:22 +0000 (0:00:01.234) 0:58:00.838 ******* 2026-02-15 06:51:41.266526 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.266536 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:51:41.266545 | orchestrator | 2026-02-15 06:51:41.266555 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-15 06:51:41.266564 | orchestrator | Sunday 15 February 2026 06:51:24 +0000 (0:00:01.275) 0:58:02.114 ******* 2026-02-15 06:51:41.266574 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.266584 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:51:41.266593 | orchestrator | 2026-02-15 06:51:41.266603 | orchestrator | TASK [ceph-handler : Set_fact 
handler_mgr_status] ****************************** 2026-02-15 06:51:41.266613 | orchestrator | Sunday 15 February 2026 06:51:25 +0000 (0:00:01.246) 0:58:03.360 ******* 2026-02-15 06:51:41.266622 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.266632 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:51:41.266642 | orchestrator | 2026-02-15 06:51:41.266651 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-15 06:51:41.266661 | orchestrator | Sunday 15 February 2026 06:51:26 +0000 (0:00:01.611) 0:58:04.972 ******* 2026-02-15 06:51:41.266677 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:51:41.266687 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:51:41.266697 | orchestrator | 2026-02-15 06:51:41.266706 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-15 06:51:41.266716 | orchestrator | Sunday 15 February 2026 06:51:28 +0000 (0:00:01.341) 0:58:06.313 ******* 2026-02-15 06:51:41.266726 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:51:41.266736 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:51:41.266745 | orchestrator | 2026-02-15 06:51:41.266755 | orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-15 06:51:41.266765 | orchestrator | Sunday 15 February 2026 06:51:29 +0000 (0:00:01.267) 0:58:07.581 ******* 2026-02-15 06:51:41.266774 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.266784 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:51:41.266794 | orchestrator | 2026-02-15 06:51:41.266803 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-15 06:51:41.266813 | orchestrator | Sunday 15 February 2026 06:51:30 +0000 (0:00:01.275) 0:58:08.857 ******* 2026-02-15 06:51:41.266822 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.266832 | orchestrator | skipping: [testbed-node-3] 
2026-02-15 06:51:41.266841 | orchestrator | 2026-02-15 06:51:41.266862 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-15 06:51:41.266872 | orchestrator | Sunday 15 February 2026 06:51:32 +0000 (0:00:01.267) 0:58:10.125 ******* 2026-02-15 06:51:41.266887 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.266897 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:51:41.266907 | orchestrator | 2026-02-15 06:51:41.266916 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-15 06:51:41.266926 | orchestrator | Sunday 15 February 2026 06:51:33 +0000 (0:00:01.383) 0:58:11.508 ******* 2026-02-15 06:51:41.266936 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.266945 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:51:41.266955 | orchestrator | 2026-02-15 06:51:41.266964 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-15 06:51:41.266974 | orchestrator | Sunday 15 February 2026 06:51:34 +0000 (0:00:01.471) 0:58:12.980 ******* 2026-02-15 06:51:41.266983 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.266993 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:51:41.267002 | orchestrator | 2026-02-15 06:51:41.267012 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-15 06:51:41.267022 | orchestrator | Sunday 15 February 2026 06:51:36 +0000 (0:00:01.294) 0:58:14.275 ******* 2026-02-15 06:51:41.267031 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.267041 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:51:41.267051 | orchestrator | 2026-02-15 06:51:41.267060 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-15 06:51:41.267070 | orchestrator | Sunday 15 February 2026 06:51:37 +0000 (0:00:01.313) 0:58:15.588 ******* 
2026-02-15 06:51:41.267079 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.267089 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:51:41.267099 | orchestrator | 2026-02-15 06:51:41.267127 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-15 06:51:41.267137 | orchestrator | Sunday 15 February 2026 06:51:38 +0000 (0:00:01.209) 0:58:16.798 ******* 2026-02-15 06:51:41.267147 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.267156 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:51:41.267166 | orchestrator | 2026-02-15 06:51:41.267176 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-15 06:51:41.267185 | orchestrator | Sunday 15 February 2026 06:51:39 +0000 (0:00:01.234) 0:58:18.032 ******* 2026-02-15 06:51:41.267195 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:51:41.267205 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:51:41.267214 | orchestrator | 2026-02-15 06:51:41.267230 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-15 06:52:27.784185 | orchestrator | Sunday 15 February 2026 06:51:41 +0000 (0:00:01.317) 0:58:19.350 ******* 2026-02-15 06:52:27.784300 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784307 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784312 | orchestrator | 2026-02-15 06:52:27.784316 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-15 06:52:27.784321 | orchestrator | Sunday 15 February 2026 06:51:42 +0000 (0:00:01.470) 0:58:20.820 ******* 2026-02-15 06:52:27.784325 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784329 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784333 | orchestrator | 2026-02-15 06:52:27.784337 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-15 06:52:27.784342 | orchestrator | Sunday 15 February 2026 06:51:44 +0000 (0:00:01.297) 0:58:22.118 ******* 2026-02-15 06:52:27.784345 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784349 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784353 | orchestrator | 2026-02-15 06:52:27.784357 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-15 06:52:27.784361 | orchestrator | Sunday 15 February 2026 06:51:45 +0000 (0:00:01.243) 0:58:23.361 ******* 2026-02-15 06:52:27.784365 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:52:27.784369 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:52:27.784373 | orchestrator | 2026-02-15 06:52:27.784377 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-15 06:52:27.784381 | orchestrator | Sunday 15 February 2026 06:51:47 +0000 (0:00:02.115) 0:58:25.477 ******* 2026-02-15 06:52:27.784384 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:52:27.784388 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:52:27.784392 | orchestrator | 2026-02-15 06:52:27.784396 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-15 06:52:27.784400 | orchestrator | Sunday 15 February 2026 06:51:49 +0000 (0:00:02.440) 0:58:27.917 ******* 2026-02-15 06:52:27.784404 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5, testbed-node-3 2026-02-15 06:52:27.784408 | orchestrator | 2026-02-15 06:52:27.784412 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-15 06:52:27.784416 | orchestrator | Sunday 15 February 2026 06:51:51 +0000 (0:00:01.465) 0:58:29.383 ******* 2026-02-15 06:52:27.784419 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784423 | orchestrator | skipping: [testbed-node-3] 
2026-02-15 06:52:27.784427 | orchestrator | 2026-02-15 06:52:27.784431 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-15 06:52:27.784435 | orchestrator | Sunday 15 February 2026 06:51:52 +0000 (0:00:01.232) 0:58:30.616 ******* 2026-02-15 06:52:27.784439 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784443 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784447 | orchestrator | 2026-02-15 06:52:27.784450 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-15 06:52:27.784454 | orchestrator | Sunday 15 February 2026 06:51:53 +0000 (0:00:01.314) 0:58:31.930 ******* 2026-02-15 06:52:27.784458 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-15 06:52:27.784462 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-15 06:52:27.784466 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-15 06:52:27.784470 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-15 06:52:27.784474 | orchestrator | 2026-02-15 06:52:27.784477 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-15 06:52:27.784492 | orchestrator | Sunday 15 February 2026 06:51:55 +0000 (0:00:02.031) 0:58:33.962 ******* 2026-02-15 06:52:27.784496 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:52:27.784500 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:52:27.784504 | orchestrator | 2026-02-15 06:52:27.784508 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-15 06:52:27.784525 | orchestrator | Sunday 15 February 2026 06:51:57 +0000 (0:00:01.606) 0:58:35.568 ******* 2026-02-15 06:52:27.784529 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784533 | 
orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784537 | orchestrator | 2026-02-15 06:52:27.784540 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-15 06:52:27.784544 | orchestrator | Sunday 15 February 2026 06:51:58 +0000 (0:00:01.329) 0:58:36.897 ******* 2026-02-15 06:52:27.784548 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784552 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784555 | orchestrator | 2026-02-15 06:52:27.784559 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-15 06:52:27.784563 | orchestrator | Sunday 15 February 2026 06:52:00 +0000 (0:00:01.382) 0:58:38.280 ******* 2026-02-15 06:52:27.784567 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784571 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784574 | orchestrator | 2026-02-15 06:52:27.784578 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-15 06:52:27.784582 | orchestrator | Sunday 15 February 2026 06:52:01 +0000 (0:00:01.230) 0:58:39.510 ******* 2026-02-15 06:52:27.784586 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5, testbed-node-3 2026-02-15 06:52:27.784590 | orchestrator | 2026-02-15 06:52:27.784594 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-15 06:52:27.784598 | orchestrator | Sunday 15 February 2026 06:52:02 +0000 (0:00:01.255) 0:58:40.766 ******* 2026-02-15 06:52:27.784602 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:52:27.784605 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:52:27.784609 | orchestrator | 2026-02-15 06:52:27.784613 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-15 06:52:27.784617 | orchestrator | Sunday 15 February 2026 
06:52:05 +0000 (0:00:02.869) 0:58:43.636 ******* 2026-02-15 06:52:27.784621 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-15 06:52:27.784634 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-15 06:52:27.784638 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-15 06:52:27.784642 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784646 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-15 06:52:27.784649 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-15 06:52:27.784653 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-15 06:52:27.784657 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784661 | orchestrator | 2026-02-15 06:52:27.784664 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-15 06:52:27.784668 | orchestrator | Sunday 15 February 2026 06:52:06 +0000 (0:00:01.297) 0:58:44.933 ******* 2026-02-15 06:52:27.784672 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784676 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784679 | orchestrator | 2026-02-15 06:52:27.784683 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-02-15 06:52:27.784687 | orchestrator | Sunday 15 February 2026 06:52:08 +0000 (0:00:01.306) 0:58:46.240 ******* 2026-02-15 06:52:27.784691 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784694 | orchestrator | 2026-02-15 06:52:27.784698 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-15 06:52:27.784702 | orchestrator | Sunday 15 February 2026 06:52:09 +0000 (0:00:01.221) 0:58:47.461 ******* 2026-02-15 06:52:27.784705 | orchestrator | 
skipping: [testbed-node-5] 2026-02-15 06:52:27.784709 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784713 | orchestrator | 2026-02-15 06:52:27.784717 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-15 06:52:27.784724 | orchestrator | Sunday 15 February 2026 06:52:10 +0000 (0:00:01.279) 0:58:48.741 ******* 2026-02-15 06:52:27.784728 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784731 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784735 | orchestrator | 2026-02-15 06:52:27.784739 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-15 06:52:27.784742 | orchestrator | Sunday 15 February 2026 06:52:11 +0000 (0:00:01.258) 0:58:50.000 ******* 2026-02-15 06:52:27.784746 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784750 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784753 | orchestrator | 2026-02-15 06:52:27.784758 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-15 06:52:27.784762 | orchestrator | Sunday 15 February 2026 06:52:13 +0000 (0:00:01.268) 0:58:51.269 ******* 2026-02-15 06:52:27.784767 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:52:27.784771 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:52:27.784775 | orchestrator | 2026-02-15 06:52:27.784780 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-15 06:52:27.784784 | orchestrator | Sunday 15 February 2026 06:52:15 +0000 (0:00:02.636) 0:58:53.905 ******* 2026-02-15 06:52:27.784788 | orchestrator | ok: [testbed-node-5] 2026-02-15 06:52:27.784792 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:52:27.784797 | orchestrator | 2026-02-15 06:52:27.784801 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-15 06:52:27.784805 | orchestrator 
| Sunday 15 February 2026 06:52:17 +0000 (0:00:01.239) 0:58:55.145 ******* 2026-02-15 06:52:27.784809 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5, testbed-node-3 2026-02-15 06:52:27.784814 | orchestrator | 2026-02-15 06:52:27.784818 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-15 06:52:27.784822 | orchestrator | Sunday 15 February 2026 06:52:18 +0000 (0:00:01.511) 0:58:56.656 ******* 2026-02-15 06:52:27.784829 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784833 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784837 | orchestrator | 2026-02-15 06:52:27.784842 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-15 06:52:27.784846 | orchestrator | Sunday 15 February 2026 06:52:19 +0000 (0:00:01.281) 0:58:57.937 ******* 2026-02-15 06:52:27.784850 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784854 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784859 | orchestrator | 2026-02-15 06:52:27.784863 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-15 06:52:27.784867 | orchestrator | Sunday 15 February 2026 06:52:21 +0000 (0:00:01.246) 0:58:59.184 ******* 2026-02-15 06:52:27.784872 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784876 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784880 | orchestrator | 2026-02-15 06:52:27.784884 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-15 06:52:27.784888 | orchestrator | Sunday 15 February 2026 06:52:22 +0000 (0:00:01.288) 0:59:00.473 ******* 2026-02-15 06:52:27.784893 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784897 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784902 | orchestrator | 2026-02-15 
06:52:27.784906 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-02-15 06:52:27.784910 | orchestrator | Sunday 15 February 2026 06:52:23 +0000 (0:00:01.268) 0:59:01.742 ******* 2026-02-15 06:52:27.784914 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784919 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784923 | orchestrator | 2026-02-15 06:52:27.784927 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-15 06:52:27.784931 | orchestrator | Sunday 15 February 2026 06:52:24 +0000 (0:00:01.266) 0:59:03.008 ******* 2026-02-15 06:52:27.784936 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784940 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784947 | orchestrator | 2026-02-15 06:52:27.784952 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-15 06:52:27.784956 | orchestrator | Sunday 15 February 2026 06:52:26 +0000 (0:00:01.206) 0:59:04.215 ******* 2026-02-15 06:52:27.784960 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:52:27.784965 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:52:27.784969 | orchestrator | 2026-02-15 06:52:27.784976 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-15 06:53:09.214431 | orchestrator | Sunday 15 February 2026 06:52:27 +0000 (0:00:01.656) 0:59:05.872 ******* 2026-02-15 06:53:09.214544 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:53:09.214561 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:53:09.214573 | orchestrator | 2026-02-15 06:53:09.214586 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-15 06:53:09.214597 | orchestrator | Sunday 15 February 2026 06:52:29 +0000 (0:00:01.271) 0:59:07.144 ******* 2026-02-15 06:53:09.214609 | orchestrator | ok: 
[testbed-node-5] 2026-02-15 06:53:09.214621 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:53:09.214632 | orchestrator | 2026-02-15 06:53:09.214643 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-15 06:53:09.214655 | orchestrator | Sunday 15 February 2026 06:52:30 +0000 (0:00:01.239) 0:59:08.383 ******* 2026-02-15 06:53:09.214667 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5, testbed-node-3 2026-02-15 06:53:09.214678 | orchestrator | 2026-02-15 06:53:09.214689 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-15 06:53:09.214701 | orchestrator | Sunday 15 February 2026 06:52:31 +0000 (0:00:01.224) 0:59:09.608 ******* 2026-02-15 06:53:09.214712 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph) 2026-02-15 06:53:09.214723 | orchestrator | ok: [testbed-node-3] => (item=/etc/ceph) 2026-02-15 06:53:09.214735 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/) 2026-02-15 06:53:09.214746 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/) 2026-02-15 06:53:09.214757 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-02-15 06:53:09.214768 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-02-15 06:53:09.214779 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-02-15 06:53:09.214790 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-02-15 06:53:09.214801 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-02-15 06:53:09.214812 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-02-15 06:53:09.214823 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-02-15 06:53:09.214835 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-02-15 06:53:09.214846 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash) 
2026-02-15 06:53:09.214857 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-02-15 06:53:09.214868 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-02-15 06:53:09.214879 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-02-15 06:53:09.214890 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-15 06:53:09.214902 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-15 06:53:09.214913 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-15 06:53:09.214926 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-15 06:53:09.214940 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-15 06:53:09.214953 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-15 06:53:09.214966 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-15 06:53:09.214979 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-15 06:53:09.214992 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-15 06:53:09.215031 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-15 06:53:09.215059 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-15 06:53:09.215073 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-15 06:53:09.215086 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph) 2026-02-15 06:53:09.215099 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph) 2026-02-15 06:53:09.215112 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph) 2026-02-15 06:53:09.215124 | orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph) 2026-02-15 06:53:09.215137 | orchestrator | 2026-02-15 06:53:09.215150 | orchestrator | TASK 
[ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-15 06:53:09.215163 | orchestrator | Sunday 15 February 2026 06:52:38 +0000 (0:00:06.769) 0:59:16.378 ******* 2026-02-15 06:53:09.215177 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5, testbed-node-3 2026-02-15 06:53:09.215190 | orchestrator | 2026-02-15 06:53:09.215203 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-15 06:53:09.215216 | orchestrator | Sunday 15 February 2026 06:52:39 +0000 (0:00:01.260) 0:59:17.638 ******* 2026-02-15 06:53:09.215229 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 06:53:09.215244 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-15 06:53:09.215258 | orchestrator | 2026-02-15 06:53:09.215271 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-15 06:53:09.215305 | orchestrator | Sunday 15 February 2026 06:52:41 +0000 (0:00:01.680) 0:59:19.318 ******* 2026-02-15 06:53:09.215317 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 06:53:09.215327 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-15 06:53:09.215338 | orchestrator | 2026-02-15 06:53:09.215349 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-15 06:53:09.215377 | orchestrator | Sunday 15 February 2026 06:52:43 +0000 (0:00:02.114) 0:59:21.433 ******* 2026-02-15 06:53:09.215389 | orchestrator | skipping: [testbed-node-5] 2026-02-15 06:53:09.215400 | orchestrator | 
skipping: [testbed-node-3]
2026-02-15 06:53:09.215410 | orchestrator |
2026-02-15 06:53:09.215421 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-02-15 06:53:09.215432 | orchestrator | Sunday 15 February 2026 06:52:44 +0000 (0:00:01.271) 0:59:22.704 *******
2026-02-15 06:53:09.215443 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:09.215453 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:09.215464 | orchestrator |
2026-02-15 06:53:09.215475 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-02-15 06:53:09.215485 | orchestrator | Sunday 15 February 2026 06:52:45 +0000 (0:00:01.318) 0:59:24.023 *******
2026-02-15 06:53:09.215496 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:09.215507 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:09.215519 | orchestrator |
2026-02-15 06:53:09.215529 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-02-15 06:53:09.215540 | orchestrator | Sunday 15 February 2026 06:52:47 +0000 (0:00:01.630) 0:59:25.653 *******
2026-02-15 06:53:09.215551 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:09.215562 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:09.215572 | orchestrator |
2026-02-15 06:53:09.215583 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-02-15 06:53:09.215593 | orchestrator | Sunday 15 February 2026 06:52:48 +0000 (0:00:01.218) 0:59:26.872 *******
2026-02-15 06:53:09.215604 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:09.215615 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:09.215635 | orchestrator |
2026-02-15 06:53:09.215646 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-02-15 06:53:09.215657 | orchestrator | Sunday 15 February 2026 06:52:50 +0000 (0:00:01.257) 0:59:28.129 *******
2026-02-15 06:53:09.215668 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:09.215679 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:09.215689 | orchestrator |
2026-02-15 06:53:09.215700 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-02-15 06:53:09.215711 | orchestrator | Sunday 15 February 2026 06:52:51 +0000 (0:00:01.292) 0:59:29.422 *******
2026-02-15 06:53:09.215722 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:09.215732 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:09.215743 | orchestrator |
2026-02-15 06:53:09.215754 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-02-15 06:53:09.215764 | orchestrator | Sunday 15 February 2026 06:52:52 +0000 (0:00:01.291) 0:59:30.714 *******
2026-02-15 06:53:09.215775 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:09.215786 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:09.215796 | orchestrator |
2026-02-15 06:53:09.215807 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-02-15 06:53:09.215818 | orchestrator | Sunday 15 February 2026 06:52:53 +0000 (0:00:01.334) 0:59:32.048 *******
2026-02-15 06:53:09.215828 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:09.215839 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:09.215850 | orchestrator |
2026-02-15 06:53:09.215860 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-02-15 06:53:09.215871 | orchestrator | Sunday 15 February 2026 06:52:55 +0000 (0:00:01.292) 0:59:33.341 *******
2026-02-15 06:53:09.215881 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:09.215892 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:09.215903 | orchestrator |
2026-02-15 06:53:09.215914 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-02-15 06:53:09.215930 | orchestrator | Sunday 15 February 2026 06:52:56 +0000 (0:00:01.647) 0:59:34.988 *******
2026-02-15 06:53:09.215941 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:09.215952 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:09.215962 | orchestrator |
2026-02-15 06:53:09.215973 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-02-15 06:53:09.215983 | orchestrator | Sunday 15 February 2026 06:52:58 +0000 (0:00:01.245) 0:59:36.234 *******
2026-02-15 06:53:09.215994 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)]
2026-02-15 06:53:09.216005 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2026-02-15 06:53:09.216015 | orchestrator |
2026-02-15 06:53:09.216026 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-02-15 06:53:09.216037 | orchestrator | Sunday 15 February 2026 06:53:02 +0000 (0:00:04.645) 0:59:40.879 *******
2026-02-15 06:53:09.216047 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-15 06:53:09.216058 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-02-15 06:53:09.216069 | orchestrator |
2026-02-15 06:53:09.216079 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-02-15 06:53:09.216090 | orchestrator | Sunday 15 February 2026 06:53:04 +0000 (0:00:01.311) 0:59:42.191 *******
2026-02-15 06:53:09.216103 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-02-15 06:53:09.216130 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-02-15 06:53:58.101752 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-02-15 06:53:58.101869 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-02-15 06:53:58.101887 | orchestrator |
2026-02-15 06:53:58.101902 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-02-15 06:53:58.101914 | orchestrator | Sunday 15 February 2026 06:53:09 +0000 (0:00:05.114) 0:59:47.306 *******
2026-02-15 06:53:58.101925 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:58.101937 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:58.101948 | orchestrator |
2026-02-15 06:53:58.101959 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-02-15 06:53:58.101970 | orchestrator | Sunday 15 February 2026 06:53:10 +0000 (0:00:01.226) 0:59:48.533 *******
2026-02-15 06:53:58.101981 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:58.101992 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:58.102003 | orchestrator |
2026-02-15 06:53:58.102014 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 06:53:58.102095 | orchestrator | Sunday 15 February 2026 06:53:11 +0000 (0:00:01.378) 0:59:49.911 *******
2026-02-15 06:53:58.102107 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:58.102118 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:58.102129 | orchestrator |
2026-02-15 06:53:58.102140 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 06:53:58.102151 | orchestrator | Sunday 15 February 2026 06:53:13 +0000 (0:00:01.323) 0:59:51.234 *******
2026-02-15 06:53:58.102161 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:58.102172 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:58.102183 | orchestrator |
2026-02-15 06:53:58.102194 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 06:53:58.102205 | orchestrator | Sunday 15 February 2026 06:53:14 +0000 (0:00:01.318) 0:59:52.553 *******
2026-02-15 06:53:58.102216 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:58.102227 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:58.102238 | orchestrator |
2026-02-15 06:53:58.102249 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 06:53:58.102260 | orchestrator | Sunday 15 February 2026 06:53:15 +0000 (0:00:01.289) 0:59:53.842 *******
2026-02-15 06:53:58.102274 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:53:58.102288 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:53:58.102300 | orchestrator |
2026-02-15 06:53:58.102313 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 06:53:58.102325 | orchestrator | Sunday 15 February 2026 06:53:17 +0000 (0:00:01.367) 0:59:55.210 *******
2026-02-15 06:53:58.102353 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-15 06:53:58.102390 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-15 06:53:58.102404 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 06:53:58.102416 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:58.102429 | orchestrator |
2026-02-15 06:53:58.102442 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 06:53:58.102478 | orchestrator | Sunday 15 February 2026 06:53:18 +0000 (0:00:01.423) 0:59:56.634 *******
2026-02-15 06:53:58.102491 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-15 06:53:58.102503 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-15 06:53:58.102515 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 06:53:58.102527 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:58.102539 | orchestrator |
2026-02-15 06:53:58.102552 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 06:53:58.102565 | orchestrator | Sunday 15 February 2026 06:53:19 +0000 (0:00:01.408) 0:59:58.042 *******
2026-02-15 06:53:58.102578 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-15 06:53:58.102591 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-15 06:53:58.102604 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 06:53:58.102616 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:58.102628 | orchestrator |
2026-02-15 06:53:58.102639 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 06:53:58.102650 | orchestrator | Sunday 15 February 2026 06:53:21 +0000 (0:00:01.824) 0:59:59.867 *******
2026-02-15 06:53:58.102661 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:53:58.102672 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:53:58.102682 | orchestrator |
2026-02-15 06:53:58.102693 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 06:53:58.102704 | orchestrator | Sunday 15 February 2026 06:53:23 +0000 (0:00:01.370) 1:00:01.237 *******
2026-02-15 06:53:58.102715 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-15 06:53:58.102726 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-02-15 06:53:58.102736 | orchestrator |
2026-02-15 06:53:58.102747 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-02-15 06:53:58.102758 | orchestrator | Sunday 15 February 2026 06:53:24 +0000 (0:00:01.496) 1:00:02.734 *******
2026-02-15 06:53:58.102768 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:53:58.102779 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:53:58.102790 | orchestrator |
2026-02-15 06:53:58.102819 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-02-15 06:53:58.102831 | orchestrator | Sunday 15 February 2026 06:53:26 +0000 (0:00:01.897) 1:00:04.632 *******
2026-02-15 06:53:58.102842 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:58.102852 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:58.102863 | orchestrator |
2026-02-15 06:53:58.102874 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-02-15 06:53:58.102885 | orchestrator | Sunday 15 February 2026 06:53:27 +0000 (0:00:01.306) 1:00:05.938 *******
2026-02-15 06:53:58.102896 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-5, testbed-node-3
2026-02-15 06:53:58.102907 | orchestrator |
2026-02-15 06:53:58.102918 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-02-15 06:53:58.102928 | orchestrator | Sunday 15 February 2026 06:53:29 +0000 (0:00:01.488) 1:00:07.427 *******
2026-02-15 06:53:58.102939 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-15 06:53:58.102950 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-02-15 06:53:58.102960 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-02-15 06:53:58.102971 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-02-15 06:53:58.102982 | orchestrator |
2026-02-15 06:53:58.102993 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-02-15 06:53:58.103003 | orchestrator | Sunday 15 February 2026 06:53:31 +0000 (0:00:01.967) 1:00:09.394 *******
2026-02-15 06:53:58.103014 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-02-15 06:53:58.103025 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-15 06:53:58.103045 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-02-15 06:53:58.103056 | orchestrator |
2026-02-15 06:53:58.103067 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-02-15 06:53:58.103077 | orchestrator | Sunday 15 February 2026 06:53:34 +0000 (0:00:03.283) 1:00:12.677 *******
2026-02-15 06:53:58.103088 | orchestrator | ok: [testbed-node-5] => (item=None)
2026-02-15 06:53:58.103099 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-02-15 06:53:58.103110 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:53:58.103120 | orchestrator | ok: [testbed-node-3] => (item=None)
2026-02-15 06:53:58.103131 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-02-15 06:53:58.103142 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:53:58.103152 | orchestrator |
2026-02-15 06:53:58.103163 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-02-15 06:53:58.103173 | orchestrator | Sunday 15 February 2026 06:53:36 +0000 (0:00:02.235) 1:00:14.913 *******
2026-02-15 06:53:58.103184 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:53:58.103195 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:53:58.103206 | orchestrator |
2026-02-15 06:53:58.103216 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-02-15 06:53:58.103227 | orchestrator | Sunday 15 February 2026 06:53:38 +0000 (0:00:01.686) 1:00:16.599 *******
2026-02-15 06:53:58.103238 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:58.103249 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:53:58.103259 | orchestrator |
2026-02-15 06:53:58.103270 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-02-15 06:53:58.103281 | orchestrator | Sunday 15 February 2026 06:53:39 +0000 (0:00:01.263) 1:00:17.863 *******
2026-02-15 06:53:58.103297 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-5, testbed-node-3
2026-02-15 06:53:58.103309 | orchestrator |
2026-02-15 06:53:58.103320 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-02-15 06:53:58.103330 | orchestrator | Sunday 15 February 2026 06:53:41 +0000 (0:00:01.592) 1:00:19.456 *******
2026-02-15 06:53:58.103341 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-5, testbed-node-3
2026-02-15 06:53:58.103352 | orchestrator |
2026-02-15 06:53:58.103394 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-02-15 06:53:58.103406 | orchestrator | Sunday 15 February 2026 06:53:42 +0000 (0:00:01.241) 1:00:20.697 *******
2026-02-15 06:53:58.103417 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:53:58.103428 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:53:58.103438 | orchestrator |
2026-02-15 06:53:58.103449 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-02-15 06:53:58.103460 | orchestrator | Sunday 15 February 2026 06:53:44 +0000 (0:00:02.277) 1:00:22.975 *******
2026-02-15 06:53:58.103471 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:53:58.103481 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:53:58.103492 | orchestrator |
2026-02-15 06:53:58.103503 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-02-15 06:53:58.103514 | orchestrator | Sunday 15 February 2026 06:53:46 +0000 (0:00:02.049) 1:00:25.024 *******
2026-02-15 06:53:58.103525 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:53:58.103535 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:53:58.103546 | orchestrator |
2026-02-15 06:53:58.103557 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-02-15 06:53:58.103568 | orchestrator | Sunday 15 February 2026 06:53:49 +0000 (0:00:02.414) 1:00:27.439 *******
2026-02-15 06:53:58.103578 | orchestrator | changed: [testbed-node-5]
2026-02-15 06:53:58.103589 | orchestrator | changed: [testbed-node-3]
2026-02-15 06:53:58.103600 | orchestrator |
2026-02-15 06:53:58.103611 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-02-15 06:53:58.103622 | orchestrator | Sunday 15 February 2026 06:53:52 +0000 (0:00:03.522) 1:00:30.961 *******
2026-02-15 06:53:58.103643 | orchestrator | ok: [testbed-node-5]
2026-02-15 06:53:58.103654 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:53:58.103665 | orchestrator |
2026-02-15 06:53:58.103676 | orchestrator | TASK [Set max_mds] *************************************************************
2026-02-15 06:53:58.103686 | orchestrator | Sunday 15 February 2026 06:53:54 +0000 (0:00:01.758) 1:00:32.720 *******
2026-02-15 06:53:58.103697 | orchestrator | skipping: [testbed-node-5]
2026-02-15 06:53:58.103716 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:54:20.987298 | orchestrator |
2026-02-15 06:54:20.987459 | orchestrator | PLAY [Upgrade ceph rgws cluster] ***********************************************
2026-02-15 06:54:20.987479 | orchestrator |
2026-02-15 06:54:20.987491 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-02-15 06:54:20.987502 | orchestrator | Sunday 15 February 2026 06:53:58 +0000 (0:00:03.467) 1:00:36.188 *******
2026-02-15 06:54:20.987514 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3
2026-02-15 06:54:20.987524 | orchestrator |
2026-02-15 06:54:20.987535 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-02-15 06:54:20.987546 | orchestrator | Sunday 15 February 2026 06:53:59 +0000 (0:00:01.131) 1:00:37.319 *******
2026-02-15 06:54:20.987557 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:20.987569 | orchestrator |
2026-02-15 06:54:20.987579 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-02-15 06:54:20.987590 | orchestrator | Sunday 15 February 2026 06:54:00 +0000 (0:00:01.496) 1:00:38.815 *******
2026-02-15 06:54:20.987600 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:20.987611 | orchestrator |
2026-02-15 06:54:20.987622 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-02-15 06:54:20.987633 | orchestrator | Sunday 15 February 2026 06:54:01 +0000 (0:00:01.162) 1:00:39.978 *******
2026-02-15 06:54:20.987644 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:20.987655 | orchestrator |
2026-02-15 06:54:20.987666 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-02-15 06:54:20.987677 | orchestrator | Sunday 15 February 2026 06:54:03 +0000 (0:00:01.481) 1:00:41.459 *******
2026-02-15 06:54:20.987688 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:20.987699 | orchestrator |
2026-02-15 06:54:20.987709 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-02-15 06:54:20.987720 | orchestrator | Sunday 15 February 2026 06:54:04 +0000 (0:00:01.270) 1:00:42.730 *******
2026-02-15 06:54:20.987731 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:20.987741 | orchestrator |
2026-02-15 06:54:20.987752 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-02-15 06:54:20.987763 | orchestrator | Sunday 15 February 2026 06:54:05 +0000 (0:00:01.214) 1:00:43.944 *******
2026-02-15 06:54:20.987773 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:20.987784 | orchestrator |
2026-02-15 06:54:20.987795 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-02-15 06:54:20.987806 | orchestrator | Sunday 15 February 2026 06:54:07 +0000 (0:00:01.161) 1:00:45.106 *******
2026-02-15 06:54:20.987817 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:54:20.987828 | orchestrator |
2026-02-15 06:54:20.987839 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-02-15 06:54:20.987850 | orchestrator | Sunday 15 February 2026 06:54:08 +0000 (0:00:01.140) 1:00:46.247 *******
2026-02-15 06:54:20.987860 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:20.987871 | orchestrator |
2026-02-15 06:54:20.987881 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-02-15 06:54:20.987892 | orchestrator | Sunday 15 February 2026 06:54:09 +0000 (0:00:01.138) 1:00:47.385 *******
2026-02-15 06:54:20.987903 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:54:20.987914 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:54:20.987924 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:54:20.987959 | orchestrator |
2026-02-15 06:54:20.987985 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-02-15 06:54:20.987996 | orchestrator | Sunday 15 February 2026 06:54:11 +0000 (0:00:01.749) 1:00:49.135 *******
2026-02-15 06:54:20.988007 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:20.988017 | orchestrator |
2026-02-15 06:54:20.988028 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-02-15 06:54:20.988040 | orchestrator | Sunday 15 February 2026 06:54:12 +0000 (0:00:01.257) 1:00:50.393 *******
2026-02-15 06:54:20.988050 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 06:54:20.988061 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 06:54:20.988072 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 06:54:20.988082 | orchestrator |
2026-02-15 06:54:20.988093 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-02-15 06:54:20.988103 | orchestrator | Sunday 15 February 2026 06:54:15 +0000 (0:00:02.848) 1:00:53.242 *******
2026-02-15 06:54:20.988114 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-02-15 06:54:20.988125 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-02-15 06:54:20.988136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-02-15 06:54:20.988147 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:54:20.988157 | orchestrator |
2026-02-15 06:54:20.988168 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-02-15 06:54:20.988179 | orchestrator | Sunday 15 February 2026 06:54:16 +0000 (0:00:01.434) 1:00:54.677 *******
2026-02-15 06:54:20.988191 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-15 06:54:20.988204 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-15 06:54:20.988232 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-15 06:54:20.988244 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:54:20.988255 | orchestrator |
2026-02-15 06:54:20.988266 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-02-15 06:54:20.988276 | orchestrator | Sunday 15 February 2026 06:54:18 +0000 (0:00:02.021) 1:00:56.698 *******
2026-02-15 06:54:20.988289 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 06:54:20.988302 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 06:54:20.988313 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-02-15 06:54:20.988333 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:54:20.988344 | orchestrator |
2026-02-15 06:54:20.988355 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-02-15 06:54:20.988365 | orchestrator | Sunday 15 February 2026 06:54:19 +0000 (0:00:01.170) 1:00:57.869 *******
2026-02-15 06:54:20.988384 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:54:12.797001', 'end': '2026-02-15 06:54:12.842355', 'delta': '0:00:00.045354', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-02-15 06:54:20.988398 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:54:13.369744', 'end': '2026-02-15 06:54:13.418464', 'delta': '0:00:00.048720', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-15 06:54:20.988429 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:54:13.926152', 'end': '2026-02-15 06:54:13.977354', 'delta': '0:00:00.051202', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-15 06:54:20.988441 | orchestrator |
2026-02-15 06:54:20.988459 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-15 06:54:39.290861 | orchestrator | Sunday 15 February 2026 06:54:20 +0000 (0:00:01.210) 1:00:59.079 *******
2026-02-15 06:54:39.290998 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:39.291024 | orchestrator |
2026-02-15 06:54:39.291043 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-15 06:54:39.291061 | orchestrator | Sunday 15 February 2026 06:54:22 +0000 (0:00:01.287) 1:01:00.367 *******
2026-02-15 06:54:39.291080 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:54:39.291100 | orchestrator |
2026-02-15 06:54:39.291119 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-15 06:54:39.291138 | orchestrator | Sunday 15 February 2026 06:54:23 +0000 (0:00:01.675) 1:01:02.042 *******
2026-02-15 06:54:39.291220 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:39.291233 | orchestrator |
2026-02-15 06:54:39.291244 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-15 06:54:39.291256 | orchestrator | Sunday 15 February 2026 06:54:25 +0000 (0:00:01.287) 1:01:03.330 *******
2026-02-15 06:54:39.291267 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:54:39.291304 | orchestrator |
2026-02-15 06:54:39.291316 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 06:54:39.291327 | orchestrator | Sunday 15 February 2026 06:54:27 +0000 (0:00:01.984) 1:01:05.315 *******
2026-02-15 06:54:39.291338 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:39.291348 | orchestrator |
2026-02-15 06:54:39.291360 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-15 06:54:39.291371 | orchestrator | Sunday 15 February 2026 06:54:28 +0000 (0:00:01.171) 1:01:06.486 *******
2026-02-15 06:54:39.291382 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:54:39.291395 | orchestrator |
2026-02-15 06:54:39.291408 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-15 06:54:39.291420 | orchestrator | Sunday 15 February 2026 06:54:29 +0000 (0:00:01.181) 1:01:07.668 *******
2026-02-15 06:54:39.291458 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:54:39.291472 | orchestrator |
2026-02-15 06:54:39.291484 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 06:54:39.291496 | orchestrator | Sunday 15 February 2026 06:54:30 +0000 (0:00:01.234) 1:01:08.903 *******
2026-02-15 06:54:39.291508 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:54:39.291521 | orchestrator |
2026-02-15 06:54:39.291533 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-15 06:54:39.291545 | orchestrator | Sunday 15 February 2026 06:54:31 +0000 (0:00:01.109) 1:01:10.012 *******
2026-02-15 06:54:39.291558 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:54:39.291571 | orchestrator |
2026-02-15 06:54:39.291583 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-15 06:54:39.291595 | orchestrator | Sunday 15 February 2026 06:54:33 +0000 (0:00:01.187) 1:01:11.200 *******
2026-02-15 06:54:39.291607 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:39.291619 | orchestrator |
2026-02-15 06:54:39.291632 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-15 06:54:39.291645 | orchestrator | Sunday 15 February 2026 06:54:34 +0000 (0:00:01.263) 1:01:12.464 *******
2026-02-15 06:54:39.291657 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:54:39.291669 | orchestrator |
2026-02-15 06:54:39.291682 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-15 06:54:39.291709 | orchestrator | Sunday 15 February 2026 06:54:35 +0000 (0:00:01.166) 1:01:13.630 *******
2026-02-15 06:54:39.291722 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:39.291734 | orchestrator |
2026-02-15 06:54:39.291747 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-15 06:54:39.291759 | orchestrator | Sunday 15 February 2026 06:54:36 +0000 (0:00:01.185) 1:01:14.816 *******
2026-02-15 06:54:39.291771 | orchestrator | skipping: [testbed-node-3]
2026-02-15 06:54:39.291784 | orchestrator |
2026-02-15 06:54:39.291796 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-15 06:54:39.291808 | orchestrator | Sunday 15 February 2026 06:54:37 +0000 (0:00:01.139) 1:01:15.956 *******
2026-02-15 06:54:39.291819 | orchestrator | ok: [testbed-node-3]
2026-02-15 06:54:39.291830 | orchestrator |
2026-02-15 06:54:39.291841 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-15 06:54:39.291851 | orchestrator | Sunday 15 February 2026 06:54:39 +0000 (0:00:01.179) 1:01:17.135 *******
2026-02-15 06:54:39.291865 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:54:39.291881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'uuids': ['d94e5f79-6313-45be-bfeb-6c020052505d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d']}})
2026-02-15 06:54:39.291927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30e735a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-15 06:54:39.291941 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769']}})
2026-02-15 06:54:39.291953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:54:39.291965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:54:39.291983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-15 06:54:39.291995 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0',
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:54:39.292006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV', 'dm-uuid-CRYPT-LUKS2-00e62f5af87144e797787951ba7c7c75-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 06:54:39.292034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:54:41.098517 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'uuids': ['00e62f5a-f871-44e7-9778-7951ba7c7c75'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV']}})  2026-02-15 06:54:41.098615 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55']}})  2026-02-15 06:54:41.098629 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:54:41.098660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cdab0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-15 06:54:41.098707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:54:41.098719 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 06:54:41.098729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d', 'dm-uuid-CRYPT-LUKS2-d94e5f79631345bebfeb6c020052505d-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 06:54:41.098741 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:54:41.098751 | orchestrator | 2026-02-15 06:54:41.098761 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 06:54:41.098771 | orchestrator | Sunday 15 February 2026 06:54:40 +0000 (0:00:01.388) 1:01:18.523 ******* 2026-02-15 06:54:41.098785 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:41.098796 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55', 'dm-uuid-LVM-o2f9f893FYeBh9VRWDOJqcRLA90B2brL8MFVD72gAZ5o36gNWsXvjFU6tptjB20d'], 'uuids': ['d94e5f79-6313-45be-bfeb-6c020052505d'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:41.098813 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e', 'scsi-SQEMU_QEMU_HARDDISK_b30e735a-b22c-4e42-bb85-734d9c181b6e'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b30e735a', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:41.098829 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-5oVAFw-Nipr-VUTl-U0Wt-Wah1-LtKf-1XCmON', 'scsi-0QEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090', 'scsi-SQEMU_QEMU_HARDDISK_b2a7c6af-0e01-4433-817a-01c5d828c090'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:42.431217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:42.431319 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:42.431354 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-32-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:42.431386 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:42.431398 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV', 'dm-uuid-CRYPT-LUKS2-00e62f5af87144e797787951ba7c7c75-nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:42.431409 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:42.431489 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--11907033--e329--56e1--bf1e--182edc1a3769-osd--block--11907033--e329--56e1--bf1e--182edc1a3769', 'dm-uuid-LVM-XsCgf3chBwzrTktR9QoTw3UC71i7Tvn1nvqAB6pzDqjuxn9fAP7MAneCejl8UpXV'], 'uuids': ['00e62f5a-f871-44e7-9778-7951ba7c7c75'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'b2a7c6af', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['nvqAB6-pzDq-juxn-9fAP-7MAn-eCej-l8UpXV']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:42.431511 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-GNgdgE-U4yn-UjqZ-rFjw-dUou-hOdb-3fwweh', 'scsi-0QEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71', 'scsi-SQEMU_QEMU_HARDDISK_d453eee5-ccb1-47a4-84c4-d84ad638bc71'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd453eee5', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--308eeb04--119e--5b1b--acdb--31959eb9ce55-osd--block--308eeb04--119e--5b1b--acdb--31959eb9ce55']}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:42.431534 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:54:42.431558 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '6cdab0dd', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1', 'scsi-SQEMU_QEMU_HARDDISK_6cdab0dd-845d-4482-b01f-950374c91f45-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:55:11.047740 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:55:11.047869 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:55:11.047904 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d', 'dm-uuid-CRYPT-LUKS2-d94e5f79631345bebfeb6c020052505d-8MFVD7-2gAZ-5o36-gNWs-XvjF-U6tp-tjB20d'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 
'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 06:55:11.047917 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:55:11.047928 | orchestrator | 2026-02-15 06:55:11.047939 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-02-15 06:55:11.047950 | orchestrator | Sunday 15 February 2026 06:54:42 +0000 (0:00:02.002) 1:01:20.526 ******* 2026-02-15 06:55:11.047960 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:55:11.047970 | orchestrator | 2026-02-15 06:55:11.047980 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-02-15 06:55:11.047989 | orchestrator | Sunday 15 February 2026 06:54:43 +0000 (0:00:01.546) 1:01:22.072 ******* 2026-02-15 06:55:11.047999 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:55:11.048009 | orchestrator | 2026-02-15 06:55:11.048018 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:55:11.048028 | orchestrator | Sunday 15 February 2026 06:54:45 +0000 (0:00:01.176) 1:01:23.248 ******* 2026-02-15 06:55:11.048037 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:55:11.048047 | orchestrator | 2026-02-15 06:55:11.048056 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:55:11.048066 | orchestrator | Sunday 15 February 2026 06:54:46 +0000 (0:00:01.435) 1:01:24.684 ******* 2026-02-15 06:55:11.048075 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:55:11.048085 | orchestrator | 2026-02-15 06:55:11.048094 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-02-15 06:55:11.048104 | orchestrator | Sunday 15 February 2026 06:54:47 +0000 (0:00:01.140) 1:01:25.824 ******* 2026-02-15 06:55:11.048113 | orchestrator | skipping: [testbed-node-3] 2026-02-15 
06:55:11.048123 | orchestrator | 2026-02-15 06:55:11.048132 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-02-15 06:55:11.048142 | orchestrator | Sunday 15 February 2026 06:54:49 +0000 (0:00:01.298) 1:01:27.123 ******* 2026-02-15 06:55:11.048151 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:55:11.048161 | orchestrator | 2026-02-15 06:55:11.048170 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-02-15 06:55:11.048180 | orchestrator | Sunday 15 February 2026 06:54:50 +0000 (0:00:01.150) 1:01:28.274 ******* 2026-02-15 06:55:11.048189 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-02-15 06:55:11.048199 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-02-15 06:55:11.048208 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-02-15 06:55:11.048217 | orchestrator | 2026-02-15 06:55:11.048227 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-02-15 06:55:11.048236 | orchestrator | Sunday 15 February 2026 06:54:51 +0000 (0:00:01.748) 1:01:30.022 ******* 2026-02-15 06:55:11.048246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-02-15 06:55:11.048256 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-02-15 06:55:11.048265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-02-15 06:55:11.048281 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:55:11.048291 | orchestrator | 2026-02-15 06:55:11.048300 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-02-15 06:55:11.048311 | orchestrator | Sunday 15 February 2026 06:54:53 +0000 (0:00:01.183) 1:01:31.206 ******* 2026-02-15 06:55:11.048337 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3 2026-02-15 06:55:11.048348 | 
2026-02-15 06:55:11.048359 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
orchestrator | Sunday 15 February 2026 06:54:54 +0000 (0:00:01.116) 1:01:32.322 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
orchestrator | Sunday 15 February 2026 06:54:55 +0000 (0:00:01.170) 1:01:33.493 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
orchestrator | Sunday 15 February 2026 06:54:56 +0000 (0:00:01.187) 1:01:34.681 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
orchestrator | Sunday 15 February 2026 06:54:57 +0000 (0:00:01.245) 1:01:35.926 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
orchestrator | Sunday 15 February 2026 06:54:59 +0000 (0:00:01.308) 1:01:37.235 *******
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
orchestrator | Sunday 15 February 2026 06:55:00 +0000 (0:00:01.491) 1:01:38.726 *******
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
orchestrator | Sunday 15 February 2026 06:55:02 +0000 (0:00:01.448) 1:01:40.175 *******
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
orchestrator | Sunday 15 February 2026 06:55:03 +0000 (0:00:01.438) 1:01:41.613 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
orchestrator | Sunday 15 February 2026 06:55:04 +0000 (0:00:01.162) 1:01:42.776 *******
orchestrator | ok: [testbed-node-3] => (item=0)
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
orchestrator | Sunday 15 February 2026 06:55:06 +0000 (0:00:01.378) 1:01:44.155 *******
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
orchestrator |
orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
orchestrator | Sunday 15 February 2026 06:55:08 +0000 (0:00:02.252) 1:01:46.407 *******
orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
orchestrator |
orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
orchestrator | Sunday 15 February 2026 06:55:11 +0000 (0:00:02.722) 1:01:49.130 *******
orchestrator | changed: [testbed-node-3]
orchestrator |
orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
orchestrator | Sunday 15 February 2026 06:55:13 +0000 (0:00:02.237) 1:01:51.367 *******
orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
orchestrator |
orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
orchestrator | Sunday 15 February 2026 06:55:16 +0000 (0:00:03.039) 1:01:54.407 *******
orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
orchestrator |
orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
orchestrator | Sunday 15 February 2026 06:55:18 +0000 (0:00:02.279) 1:01:56.687 *******
orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3
orchestrator |
orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
orchestrator | Sunday 15 February 2026 06:55:19 +0000 (0:00:01.264) 1:01:57.951 *******
orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3
orchestrator |
orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
orchestrator | Sunday 15 February 2026 06:55:20 +0000 (0:00:01.145) 1:01:59.097 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
orchestrator | Sunday 15 February 2026 06:55:22 +0000 (0:00:01.156) 1:02:00.254 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
orchestrator | Sunday 15 February 2026 06:55:23 +0000 (0:00:01.499) 1:02:01.753 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
orchestrator | Sunday 15 February 2026 06:55:25 +0000 (0:00:01.624) 1:02:03.377 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
orchestrator | Sunday 15 February 2026 06:55:26 +0000 (0:00:01.541) 1:02:04.919 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
orchestrator | Sunday 15 February 2026 06:55:28 +0000 (0:00:01.219) 1:02:06.139 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
orchestrator | Sunday 15 February 2026 06:55:29 +0000 (0:00:01.128) 1:02:07.267 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
orchestrator | Sunday 15 February 2026 06:55:30 +0000 (0:00:01.262) 1:02:08.530 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
orchestrator | Sunday 15 February 2026 06:55:31 +0000 (0:00:01.546) 1:02:10.077 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
orchestrator | Sunday 15 February 2026 06:55:33 +0000 (0:00:01.572) 1:02:11.649 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
orchestrator | Sunday 15 February 2026 06:55:34 +0000 (0:00:01.207) 1:02:12.857 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
orchestrator | Sunday 15 February 2026 06:55:35 +0000 (0:00:01.138) 1:02:13.996 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
orchestrator | Sunday 15 February 2026 06:55:37 +0000 (0:00:01.235) 1:02:15.231 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
orchestrator | Sunday 15 February 2026 06:55:38 +0000 (0:00:01.212) 1:02:16.443 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
orchestrator | Sunday 15 February 2026 06:55:39 +0000 (0:00:01.167) 1:02:17.611 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
orchestrator | Sunday 15 February 2026 06:55:40 +0000 (0:00:01.184) 1:02:18.796 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
orchestrator | Sunday 15 February 2026 06:55:41 +0000 (0:00:01.123) 1:02:19.919 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
orchestrator | Sunday 15 February 2026 06:55:43 +0000 (0:00:01.203) 1:02:21.123 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
orchestrator | Sunday 15 February 2026 06:55:44 +0000 (0:00:01.168) 1:02:22.291 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
orchestrator | Sunday 15 February 2026 06:55:45 +0000 (0:00:01.338) 1:02:23.630 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
orchestrator | Sunday 15 February 2026 06:55:46 +0000 (0:00:01.106) 1:02:24.736 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
orchestrator | Sunday 15 February 2026 06:55:47 +0000 (0:00:01.182) 1:02:25.919 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
orchestrator | Sunday 15 February 2026 06:55:49 +0000 (0:00:01.259) 1:02:27.178 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
orchestrator | Sunday 15 February 2026 06:55:50 +0000 (0:00:01.193) 1:02:28.372 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Get ceph version] ******************************************
orchestrator | Sunday 15 February 2026 06:55:51 +0000 (0:00:01.154) 1:02:29.526 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
orchestrator | Sunday 15 February 2026 06:55:52 +0000 (0:00:01.177) 1:02:30.704 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
orchestrator | Sunday 15 February 2026 06:55:53 +0000 (0:00:01.184) 1:02:31.889 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
orchestrator | Sunday 15 February 2026 06:55:54 +0000 (0:00:01.134) 1:02:33.023 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
orchestrator | Sunday 15 February 2026 06:55:56 +0000 (0:00:01.126) 1:02:34.150 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
orchestrator | Sunday 15 February 2026 06:55:57 +0000 (0:00:01.141) 1:02:35.291 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
orchestrator | Sunday 15 February 2026 06:55:58 +0000 (0:00:01.140) 1:02:36.432 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
orchestrator | Sunday 15 February 2026 06:55:59 +0000 (0:00:01.129) 1:02:37.561 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
orchestrator | Sunday 15 February 2026 06:56:01 +0000 (0:00:01.900) 1:02:39.462 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
orchestrator | Sunday 15 February 2026 06:56:03 +0000 (0:00:02.230) 1:02:41.692 *******
orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3
orchestrator |
orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
orchestrator | Sunday 15 February 2026 06:56:04 +0000 (0:00:01.327) 1:02:43.020 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
orchestrator | Sunday 15 February 2026 06:56:06 +0000 (0:00:01.176) 1:02:44.197 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
orchestrator | Sunday 15 February 2026 06:56:07 +0000 (0:00:01.156) 1:02:45.353 *******
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator |
orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
orchestrator | Sunday 15 February 2026 06:56:09 +0000 (0:00:01.866) 1:02:47.220 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
orchestrator | Sunday 15 February 2026 06:56:10 +0000 (0:00:01.486) 1:02:48.707 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
orchestrator | Sunday 15 February 2026 06:56:11 +0000 (0:00:01.127) 1:02:49.835 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
orchestrator | Sunday 15 February 2026 06:56:12 +0000 (0:00:01.215) 1:02:51.050 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
orchestrator | Sunday 15 February 2026 06:56:14 +0000 (0:00:01.197) 1:02:52.248 *******
orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
orchestrator | Sunday 15 February 2026 06:56:15 +0000 (0:00:01.182) 1:02:53.431 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
orchestrator | Sunday 15 February 2026 06:56:17 +0000 (0:00:01.680) 1:02:55.111 *******
orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
orchestrator | Sunday 15 February 2026 06:56:18 +0000 (0:00:01.156) 1:02:56.268 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
orchestrator | Sunday 15 February 2026 06:56:19 +0000 (0:00:01.126) 1:02:57.394 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
orchestrator | Sunday 15 February 2026 06:56:20 +0000 (0:00:01.269) 1:02:58.663 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
orchestrator | Sunday 15 February 2026 06:56:21 +0000 (0:00:01.132) 1:02:59.796 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
orchestrator | Sunday 15 February 2026 06:56:22 +0000 (0:00:01.156) 1:03:00.952 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
orchestrator | Sunday 15 February 2026 06:56:24 +0000 (0:00:02.441) 1:03:02.135 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
orchestrator | Sunday 15 February 2026 06:56:26 +0000 (0:00:01.142) 1:03:04.576 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
orchestrator | Sunday 15 February 2026 06:56:27 +0000 (0:00:01.124) 1:03:05.719 *******
orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
orchestrator | Sunday 15 February 2026 06:56:28 +0000 (0:00:01.124) 1:03:06.843 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
orchestrator | Sunday 15 February 2026 06:56:29 +0000 (0:00:01.157) 1:03:08.001 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
orchestrator | Sunday 15 February 2026 06:56:31 +0000 (0:00:01.198) 1:03:09.200 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
orchestrator | Sunday 15 February 2026 06:56:32 +0000 (0:00:01.159) 1:03:10.360 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
orchestrator | Sunday 15 February 2026 06:56:33 +0000 (0:00:01.169) 1:03:11.530 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
orchestrator | Sunday 15 February 2026 06:56:34 +0000 (0:00:01.164) 1:03:12.694 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
orchestrator | Sunday 15 February 2026 06:56:35 +0000 (0:00:01.181) 1:03:13.876 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
orchestrator | Sunday 15 February 2026 06:56:36 +0000 (0:00:01.181) 1:03:15.057 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
orchestrator | Sunday 15 February 2026 06:56:38 +0000 (0:00:01.158) 1:03:16.216 *******
orchestrator | ok: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
orchestrator | Sunday 15 February 2026 06:56:39 +0000 (0:00:01.142) 1:03:17.358 *******
orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3
orchestrator |
orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
orchestrator | Sunday 15 February 2026 06:56:40 +0000 (0:00:01.200) 1:03:18.558 *******
orchestrator | ok: [testbed-node-3] => (item=/etc/ceph)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mon)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/mds)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/tmp)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/crash)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
orchestrator | ok: [testbed-node-3] => (item=/var/log/ceph)
orchestrator |
orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
orchestrator | Sunday 15 February 2026 06:56:46 +0000 (0:00:06.519) 1:03:25.078 *******
orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3
orchestrator |
orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
orchestrator | Sunday 15 February 2026 06:56:48 +0000 (0:00:01.161) 1:03:26.240 *******
orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
orchestrator |
orchestrator | TASK [ceph-config : Generate environment file] *********************************
orchestrator | Sunday 15 February 2026 06:56:49 +0000 (0:00:01.559) 1:03:27.800 *******
orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
orchestrator |
orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
orchestrator | Sunday 15 February 2026 06:56:51 +0000 (0:00:02.025) 1:03:29.825 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
orchestrator | Sunday 15 February 2026 06:56:52 +0000 (0:00:01.181) 1:03:31.007 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
orchestrator | Sunday 15 February 2026 06:56:54 +0000 (0:00:01.330) 1:03:32.337 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
orchestrator | Sunday 15 February 2026 06:56:55 +0000 (0:00:01.148) 1:03:33.487 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
orchestrator | Sunday 15 February 2026 06:56:56 +0000 (0:00:01.177) 1:03:34.664 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
orchestrator | Sunday 15 February 2026 06:56:57 +0000 (0:00:01.151) 1:03:35.815 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
orchestrator | Sunday 15 February 2026 06:56:58 +0000 (0:00:01.135) 1:03:36.951 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
orchestrator | Sunday 15 February 2026 06:56:59 +0000 (0:00:01.127) 1:03:38.079 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
orchestrator | Sunday 15 February 2026 06:57:01 +0000 (0:00:01.127) 1:03:39.206 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
orchestrator | Sunday 15 February 2026 06:57:02 +0000 (0:00:01.134) 1:03:40.340 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
orchestrator | Sunday 15 February 2026 06:57:03 +0000 (0:00:01.141) 1:03:41.481 *******
orchestrator | skipping: [testbed-node-3]
orchestrator |
orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
orchestrator | Sunday 15 February 2026 06:57:04 +0000 (0:00:01.143) 1:03:42.625 *******
orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
orchestrator |
orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
orchestrator | Sunday 15 February 2026 06:57:08 +0000 (0:00:04.439) 1:03:47.065 *******
orchestrator |
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-15 06:57:42.811632 | orchestrator | 2026-02-15 06:57:42.811645 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-15 06:57:42.811657 | orchestrator | Sunday 15 February 2026 06:57:10 +0000 (0:00:01.186) 1:03:48.251 ******* 2026-02-15 06:57:42.811672 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}]) 2026-02-15 06:57:42.811688 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}]) 2026-02-15 06:57:42.811729 | orchestrator | 2026-02-15 06:57:42.811750 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-15 06:57:42.811763 | orchestrator | Sunday 15 February 2026 06:57:15 +0000 (0:00:05.243) 1:03:53.495 ******* 2026-02-15 06:57:42.811775 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:57:42.811787 | orchestrator | 2026-02-15 06:57:42.811799 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-15 06:57:42.811810 | orchestrator | Sunday 15 February 2026 06:57:16 +0000 (0:00:01.188) 1:03:54.684 ******* 2026-02-15 06:57:42.811821 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:57:42.811831 | orchestrator | 2026-02-15 06:57:42.811842 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-15 06:57:42.811870 | orchestrator | Sunday 15 February 2026 06:57:17 +0000 (0:00:01.163) 1:03:55.847 ******* 2026-02-15 06:57:42.811881 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:57:42.811892 | orchestrator | 2026-02-15 06:57:42.811903 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-15 06:57:42.811913 | orchestrator | Sunday 15 February 2026 06:57:18 +0000 (0:00:01.180) 1:03:57.028 ******* 2026-02-15 06:57:42.811924 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:57:42.811934 | orchestrator | 2026-02-15 06:57:42.811945 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-15 06:57:42.811955 | orchestrator | Sunday 15 February 2026 06:57:20 +0000 (0:00:01.204) 1:03:58.232 ******* 2026-02-15 06:57:42.811966 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:57:42.811976 | orchestrator | 2026-02-15 06:57:42.811987 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-15 06:57:42.812003 | orchestrator | Sunday 15 February 2026 06:57:21 +0000 (0:00:01.190) 1:03:59.423 ******* 2026-02-15 06:57:42.812014 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:57:42.812026 | orchestrator | 2026-02-15 06:57:42.812036 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-15 06:57:42.812047 | orchestrator | Sunday 15 February 2026 06:57:22 +0000 (0:00:01.282) 1:04:00.705 ******* 2026-02-15 06:57:42.812058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 06:57:42.812068 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 06:57:42.812079 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-15 06:57:42.812090 | orchestrator | skipping: 
[testbed-node-3] 2026-02-15 06:57:42.812100 | orchestrator | 2026-02-15 06:57:42.812111 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 06:57:42.812121 | orchestrator | Sunday 15 February 2026 06:57:24 +0000 (0:00:01.579) 1:04:02.285 ******* 2026-02-15 06:57:42.812132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 06:57:42.812142 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 06:57:42.812153 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-15 06:57:42.812163 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:57:42.812174 | orchestrator | 2026-02-15 06:57:42.812184 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 06:57:42.812195 | orchestrator | Sunday 15 February 2026 06:57:25 +0000 (0:00:01.505) 1:04:03.790 ******* 2026-02-15 06:57:42.812206 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-02-15 06:57:42.812216 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-02-15 06:57:42.812227 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-02-15 06:57:42.812237 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:57:42.812248 | orchestrator | 2026-02-15 06:57:42.812258 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 06:57:42.812269 | orchestrator | Sunday 15 February 2026 06:57:27 +0000 (0:00:01.493) 1:04:05.284 ******* 2026-02-15 06:57:42.812280 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:57:42.812291 | orchestrator | 2026-02-15 06:57:42.812301 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 06:57:42.812319 | orchestrator | Sunday 15 February 2026 06:57:28 +0000 (0:00:01.194) 1:04:06.478 ******* 2026-02-15 06:57:42.812329 | orchestrator | ok: 
[testbed-node-3] => (item=0) 2026-02-15 06:57:42.812340 | orchestrator | 2026-02-15 06:57:42.812350 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-15 06:57:42.812361 | orchestrator | Sunday 15 February 2026 06:57:29 +0000 (0:00:01.391) 1:04:07.870 ******* 2026-02-15 06:57:42.812372 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:57:42.812382 | orchestrator | 2026-02-15 06:57:42.812393 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-15 06:57:42.812404 | orchestrator | Sunday 15 February 2026 06:57:31 +0000 (0:00:01.740) 1:04:09.611 ******* 2026-02-15 06:57:42.812414 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3 2026-02-15 06:57:42.812425 | orchestrator | 2026-02-15 06:57:42.812435 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-15 06:57:42.812446 | orchestrator | Sunday 15 February 2026 06:57:33 +0000 (0:00:01.667) 1:04:11.279 ******* 2026-02-15 06:57:42.812456 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 06:57:42.812467 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-15 06:57:42.812478 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 06:57:42.812488 | orchestrator | 2026-02-15 06:57:42.812499 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-15 06:57:42.812509 | orchestrator | Sunday 15 February 2026 06:57:36 +0000 (0:00:03.326) 1:04:14.605 ******* 2026-02-15 06:57:42.812520 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-15 06:57:42.812530 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-02-15 06:57:42.812541 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:57:42.812552 | orchestrator | 2026-02-15 06:57:42.812562 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-15 06:57:42.812573 | orchestrator | Sunday 15 February 2026 06:57:38 +0000 (0:00:01.973) 1:04:16.579 ******* 2026-02-15 06:57:42.812583 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:57:42.812594 | orchestrator | 2026-02-15 06:57:42.812604 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-15 06:57:42.812615 | orchestrator | Sunday 15 February 2026 06:57:39 +0000 (0:00:01.134) 1:04:17.713 ******* 2026-02-15 06:57:42.812626 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3 2026-02-15 06:57:42.812637 | orchestrator | 2026-02-15 06:57:42.812647 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-15 06:57:42.812658 | orchestrator | Sunday 15 February 2026 06:57:41 +0000 (0:00:01.545) 1:04:19.259 ******* 2026-02-15 06:57:42.812675 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-15 06:58:58.345303 | orchestrator | 2026-02-15 06:58:58.345414 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-15 06:58:58.345428 | orchestrator | Sunday 15 February 2026 06:57:42 +0000 (0:00:01.642) 1:04:20.901 ******* 2026-02-15 06:58:58.345438 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 06:58:58.345449 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-15 06:58:58.345458 | orchestrator | 2026-02-15 06:58:58.345467 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-15 06:58:58.345489 | orchestrator | Sunday 15 February 2026 06:57:47 +0000 (0:00:05.195) 1:04:26.097 ******* 
2026-02-15 06:58:58.345498 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 06:58:58.345508 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 06:58:58.345517 | orchestrator | 2026-02-15 06:58:58.345545 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-15 06:58:58.345555 | orchestrator | Sunday 15 February 2026 06:57:51 +0000 (0:00:03.257) 1:04:29.355 ******* 2026-02-15 06:58:58.345563 | orchestrator | ok: [testbed-node-3] => (item=None) 2026-02-15 06:58:58.345572 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:58:58.345582 | orchestrator | 2026-02-15 06:58:58.345590 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-15 06:58:58.345599 | orchestrator | Sunday 15 February 2026 06:57:53 +0000 (0:00:02.036) 1:04:31.392 ******* 2026-02-15 06:58:58.345607 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-02-15 06:58:58.345616 | orchestrator | 2026-02-15 06:58:58.345624 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-15 06:58:58.345633 | orchestrator | Sunday 15 February 2026 06:57:54 +0000 (0:00:01.666) 1:04:33.059 ******* 2026-02-15 06:58:58.345642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 06:58:58.345651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 06:58:58.345659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 06:58:58.345668 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-15 06:58:58.345677 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 06:58:58.345686 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:58:58.345694 | orchestrator | 2026-02-15 06:58:58.345703 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-15 06:58:58.345711 | orchestrator | Sunday 15 February 2026 06:57:56 +0000 (0:00:01.656) 1:04:34.715 ******* 2026-02-15 06:58:58.345720 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 06:58:58.345729 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 06:58:58.345737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 06:58:58.345749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 06:58:58.345770 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 06:58:58.345831 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:58:58.345848 | orchestrator | 2026-02-15 06:58:58.345861 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-15 06:58:58.345875 | orchestrator | Sunday 15 February 2026 06:57:58 +0000 (0:00:01.664) 1:04:36.379 ******* 2026-02-15 06:58:58.345890 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 06:58:58.345907 
| orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 06:58:58.345922 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 06:58:58.345937 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 06:58:58.345955 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 06:58:58.345983 | orchestrator | 2026-02-15 06:58:58.345999 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-15 06:58:58.346099 | orchestrator | Sunday 15 February 2026 06:58:30 +0000 (0:00:32.216) 1:05:08.595 ******* 2026-02-15 06:58:58.346123 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:58:58.346146 | orchestrator | 2026-02-15 06:58:58.346161 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-15 06:58:58.346176 | orchestrator | Sunday 15 February 2026 06:58:31 +0000 (0:00:01.114) 1:05:09.709 ******* 2026-02-15 06:58:58.346192 | orchestrator | skipping: [testbed-node-3] 2026-02-15 06:58:58.346206 | orchestrator | 2026-02-15 06:58:58.346220 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-15 06:58:58.346230 | orchestrator | Sunday 15 February 2026 06:58:32 +0000 (0:00:01.122) 1:05:10.832 ******* 2026-02-15 06:58:58.346239 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3 2026-02-15 06:58:58.346247 | orchestrator | 2026-02-15 06:58:58.346263 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-15 06:58:58.346272 | orchestrator | Sunday 15 February 2026 06:58:34 +0000 (0:00:01.466) 1:05:12.299 ******* 2026-02-15 06:58:58.346281 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3 2026-02-15 06:58:58.346290 | orchestrator | 2026-02-15 06:58:58.346298 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-15 06:58:58.346307 | orchestrator | Sunday 15 February 2026 06:58:35 +0000 (0:00:01.518) 1:05:13.817 ******* 2026-02-15 06:58:58.346315 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:58:58.346324 | orchestrator | 2026-02-15 06:58:58.346332 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-15 06:58:58.346341 | orchestrator | Sunday 15 February 2026 06:58:37 +0000 (0:00:02.080) 1:05:15.897 ******* 2026-02-15 06:58:58.346350 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:58:58.346358 | orchestrator | 2026-02-15 06:58:58.346367 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-15 06:58:58.346375 | orchestrator | Sunday 15 February 2026 06:58:39 +0000 (0:00:01.968) 1:05:17.866 ******* 2026-02-15 06:58:58.346384 | orchestrator | ok: [testbed-node-3] 2026-02-15 06:58:58.346392 | orchestrator | 2026-02-15 06:58:58.346401 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-15 06:58:58.346409 | orchestrator | Sunday 15 February 2026 06:58:42 +0000 (0:00:02.256) 1:05:20.123 ******* 2026-02-15 06:58:58.346419 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-02-15 06:58:58.346427 | orchestrator | 2026-02-15 06:58:58.346436 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-15 06:58:58.346444 | 
orchestrator | 2026-02-15 06:58:58.346453 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 06:58:58.346462 | orchestrator | Sunday 15 February 2026 06:58:45 +0000 (0:00:03.124) 1:05:23.248 ******* 2026-02-15 06:58:58.346470 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-4 2026-02-15 06:58:58.346479 | orchestrator | 2026-02-15 06:58:58.346487 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 06:58:58.346496 | orchestrator | Sunday 15 February 2026 06:58:46 +0000 (0:00:01.122) 1:05:24.371 ******* 2026-02-15 06:58:58.346504 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:58:58.346513 | orchestrator | 2026-02-15 06:58:58.346521 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 06:58:58.346530 | orchestrator | Sunday 15 February 2026 06:58:47 +0000 (0:00:01.465) 1:05:25.837 ******* 2026-02-15 06:58:58.346538 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:58:58.346547 | orchestrator | 2026-02-15 06:58:58.346556 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 06:58:58.346573 | orchestrator | Sunday 15 February 2026 06:58:48 +0000 (0:00:01.129) 1:05:26.966 ******* 2026-02-15 06:58:58.346581 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:58:58.346590 | orchestrator | 2026-02-15 06:58:58.346598 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 06:58:58.346607 | orchestrator | Sunday 15 February 2026 06:58:50 +0000 (0:00:01.449) 1:05:28.416 ******* 2026-02-15 06:58:58.346616 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:58:58.346624 | orchestrator | 2026-02-15 06:58:58.346633 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 06:58:58.346641 | orchestrator | Sunday 15 
February 2026 06:58:51 +0000 (0:00:01.143) 1:05:29.560 ******* 2026-02-15 06:58:58.346650 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:58:58.346658 | orchestrator | 2026-02-15 06:58:58.346667 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 06:58:58.346676 | orchestrator | Sunday 15 February 2026 06:58:52 +0000 (0:00:01.173) 1:05:30.733 ******* 2026-02-15 06:58:58.346684 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:58:58.346693 | orchestrator | 2026-02-15 06:58:58.346701 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 06:58:58.346710 | orchestrator | Sunday 15 February 2026 06:58:53 +0000 (0:00:01.287) 1:05:32.021 ******* 2026-02-15 06:58:58.346719 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:58:58.346727 | orchestrator | 2026-02-15 06:58:58.346736 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 06:58:58.346744 | orchestrator | Sunday 15 February 2026 06:58:55 +0000 (0:00:01.221) 1:05:33.242 ******* 2026-02-15 06:58:58.346753 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:58:58.346761 | orchestrator | 2026-02-15 06:58:58.346770 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 06:58:58.346779 | orchestrator | Sunday 15 February 2026 06:58:56 +0000 (0:00:01.148) 1:05:34.390 ******* 2026-02-15 06:58:58.346813 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:58:58.346829 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:58:58.346845 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:58:58.346860 | orchestrator | 2026-02-15 06:58:58.346871 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-15 06:58:58.346888 | orchestrator | Sunday 15 February 2026 06:58:58 +0000 (0:00:02.046) 1:05:36.437 ******* 2026-02-15 06:59:23.464471 | orchestrator | ok: [testbed-node-4] 2026-02-15 06:59:23.464588 | orchestrator | 2026-02-15 06:59:23.464604 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 06:59:23.464617 | orchestrator | Sunday 15 February 2026 06:58:59 +0000 (0:00:01.282) 1:05:37.719 ******* 2026-02-15 06:59:23.464629 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 06:59:23.464641 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 06:59:23.464652 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 06:59:23.464663 | orchestrator | 2026-02-15 06:59:23.464689 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 06:59:23.464700 | orchestrator | Sunday 15 February 2026 06:59:02 +0000 (0:00:02.912) 1:05:40.632 ******* 2026-02-15 06:59:23.464712 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-02-15 06:59:23.464722 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-02-15 06:59:23.464733 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-02-15 06:59:23.464743 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:59:23.464754 | orchestrator | 2026-02-15 06:59:23.464765 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 06:59:23.464775 | orchestrator | Sunday 15 February 2026 06:59:04 +0000 (0:00:01.495) 1:05:42.127 ******* 2026-02-15 06:59:23.464885 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 06:59:23.464905 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 06:59:23.464916 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 06:59:23.464927 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:59:23.464938 | orchestrator | 2026-02-15 06:59:23.464949 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 06:59:23.464960 | orchestrator | Sunday 15 February 2026 06:59:05 +0000 (0:00:01.620) 1:05:43.748 ******* 2026-02-15 06:59:23.464972 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:59:23.464985 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:59:23.464996 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 06:59:23.465010 | orchestrator | skipping: [testbed-node-4] 2026-02-15 06:59:23.465022 | orchestrator | 2026-02-15 06:59:23.465036 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 06:59:23.465048 | orchestrator | Sunday 15 February 2026 06:59:06 +0000 (0:00:01.234) 1:05:44.982 ******* 2026-02-15 06:59:23.465081 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 06:59:00.178428', 'end': '2026-02-15 06:59:00.227612', 'delta': '0:00:00.049184', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 06:59:23.465104 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 06:59:00.775465', 'end': '2026-02-15 06:59:00.823404', 'delta': '0:00:00.047939', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-02-15 06:59:23.465128 | orchestrator | ok: [testbed-node-4] => (item={'changed': False, 'stdout': 'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 06:59:01.339537', 'end': '2026-02-15 06:59:01.389751', 'delta': '0:00:00.050214', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-02-15 06:59:23.465140 | orchestrator |
2026-02-15 06:59:23.465153 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-02-15 06:59:23.465165 | orchestrator | Sunday 15 February 2026 06:59:08 +0000 (0:00:01.212) 1:05:46.195 *******
2026-02-15 06:59:23.465178 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:59:23.465190 | orchestrator |
2026-02-15 06:59:23.465204 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-02-15 06:59:23.465223 | orchestrator | Sunday 15 February 2026 06:59:09 +0000 (0:00:01.262) 1:05:47.458 *******
2026-02-15 06:59:23.465244 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:59:23.465264 | orchestrator |
2026-02-15 06:59:23.465285 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-02-15 06:59:23.465306 | orchestrator | Sunday 15 February 2026 06:59:10 +0000 (0:00:01.303) 1:05:48.762 *******
2026-02-15 06:59:23.465326 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:59:23.465348 | orchestrator |
2026-02-15 06:59:23.465368 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-15 06:59:23.465382 | orchestrator | Sunday 15 February 2026 06:59:11 +0000 (0:00:01.175) 1:05:49.937 *******
2026-02-15 06:59:23.465393 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-02-15 06:59:23.465403 | orchestrator |
2026-02-15 06:59:23.465414 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 06:59:23.465424 | orchestrator | Sunday 15 February 2026 06:59:13 +0000 (0:00:02.034) 1:05:51.972 *******
2026-02-15 06:59:23.465435 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:59:23.465445 | orchestrator |
2026-02-15 06:59:23.465456 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-15 06:59:23.465466 | orchestrator | Sunday 15 February 2026 06:59:15 +0000 (0:00:01.245) 1:05:53.217 *******
2026-02-15 06:59:23.465477 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:59:23.465488 | orchestrator |
2026-02-15 06:59:23.465498 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-15 06:59:23.465509 | orchestrator | Sunday 15 February 2026 06:59:16 +0000 (0:00:01.144) 1:05:54.361 *******
2026-02-15 06:59:23.465519 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:59:23.465530 | orchestrator |
2026-02-15 06:59:23.465540 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 06:59:23.465550 | orchestrator | Sunday 15 February 2026 06:59:17 +0000 (0:00:01.245) 1:05:55.607 *******
2026-02-15 06:59:23.465561 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:59:23.465571 | orchestrator |
2026-02-15 06:59:23.465582 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-15 06:59:23.465593 | orchestrator | Sunday 15 February 2026 06:59:18 +0000 (0:00:01.228) 1:05:56.835 *******
2026-02-15 06:59:23.465603 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:59:23.465614 | orchestrator |
2026-02-15 06:59:23.465624 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-15 06:59:23.465643 | orchestrator | Sunday 15 February 2026 06:59:19 +0000 (0:00:01.164) 1:05:57.999 *******
2026-02-15 06:59:23.465654 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:59:23.465665 | orchestrator |
2026-02-15 06:59:23.465675 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-15 06:59:23.465686 | orchestrator | Sunday 15 February 2026 06:59:21 +0000 (0:00:01.233) 1:05:59.233 *******
2026-02-15 06:59:23.465696 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:59:23.465707 | orchestrator |
2026-02-15 06:59:23.465717 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-15 06:59:23.465728 | orchestrator | Sunday 15 February 2026 06:59:22 +0000 (0:00:01.142) 1:06:00.376 *******
2026-02-15 06:59:23.465738 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:59:23.465749 | orchestrator |
2026-02-15 06:59:23.465760 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-15 06:59:23.465778 | orchestrator | Sunday 15 February 2026 06:59:23 +0000 (0:00:01.174) 1:06:01.551 *******
2026-02-15 06:59:26.026115 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:59:26.026215 | orchestrator |
2026-02-15 06:59:26.026230 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-15 06:59:26.026242 | orchestrator | Sunday 15 February 2026 06:59:24 +0000 (0:00:01.146) 1:06:02.697 *******
2026-02-15 06:59:26.026253 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:59:26.026264 | orchestrator |
2026-02-15 06:59:26.026273 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-02-15 06:59:26.026283 | orchestrator | Sunday 15 February 2026 06:59:25 +0000 (0:00:01.172) 1:06:03.870 *******
2026-02-15 06:59:26.026312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:59:26.026327 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee', 'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'uuids': ['65aea23d-0c6f-484a-a24c-521c476a1576'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k']}})
2026-02-15 06:59:26.026342 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7cc59cd1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-15 06:59:26.026361 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800']}})
2026-02-15 06:59:26.026404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:59:26.026424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:59:26.026467 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})
2026-02-15 06:59:26.026485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:59:26.026496 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24', 'dm-uuid-CRYPT-LUKS2-d6fb5e45582d485d831faba7ab4bd3c7-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-15 06:59:26.026507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:59:26.026517 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'uuids': ['d6fb5e45-582d-485d-831f-aba7ab4bd3c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24']}})
2026-02-15 06:59:26.026528 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee']}})
2026-02-15 06:59:26.026544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:59:26.026573 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7713f0f4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})
2026-02-15 06:59:27.376192 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:59:27.376307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})
2026-02-15 06:59:27.376366 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k', 'dm-uuid-CRYPT-LUKS2-65aea23d0c6f484aa24c521c476a1576-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})
2026-02-15 06:59:27.376392 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:59:27.376408 | orchestrator |
2026-02-15 06:59:27.376420 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] ***
2026-02-15 06:59:27.376432 | orchestrator | Sunday 15 February 2026 06:59:27 +0000 (0:00:01.364) 1:06:05.234 *******
2026-02-15 06:59:27.376444 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:27.376472 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee', 'dm-uuid-LVM-LPUKxkrBTeieOTZ6e0ZXciiasHMB50tPGji0opAuWaeNxMI7eUCwIYYUKkZDTL6k'], 'uuids': ['65aea23d-0c6f-484a-a24c-521c476a1576'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:27.376486 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7', 'scsi-SQEMU_QEMU_HARDDISK_7cc59cd1-b9bd-45a5-8870-6b105d7c74c7'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7cc59cd1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:27.376517 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-IvHEfu-ih0L-3H2z-po1B-1gCS-LEvi-5u5s1a', 'scsi-0QEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57', 'scsi-SQEMU_QEMU_HARDDISK_d479ce5c-4f98-42f4-9c6b-b762f9d34a57'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:27.376540 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:27.376552 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:27.376570 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-31-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:27.376582 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:27.376600 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24', 'dm-uuid-CRYPT-LUKS2-d6fb5e45582d485d831faba7ab4bd3c7-6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:32.874526 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:32.874664 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--85fe8ada--5694--5853--9626--8b4c90604800-osd--block--85fe8ada--5694--5853--9626--8b4c90604800', 'dm-uuid-LVM-qXECB59X2zDcgvlDYfuuiY5CkYuOSMNI6hUuq94THPzQl9Hrqp6SsXM7izwzJL24'], 'uuids': ['d6fb5e45-582d-485d-831f-aba7ab4bd3c7'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': 'd479ce5c', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['6hUuq9-4THP-zQl9-Hrqp-6SsX-M7iz-wzJL24']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:32.874687 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-U7TJPD-k0IK-gp6w-EmIR-HQpC-VWfX-SYsiH2', 'scsi-0QEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0', 'scsi-SQEMU_QEMU_HARDDISK_bfdd46b1-6e80-4940-b9c3-db3605a460a0'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'bfdd46b1', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--12f88160--c11a--5ad6--adc7--3b0cfe47daee-osd--block--12f88160--c11a--5ad6--adc7--3b0cfe47daee']}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:32.874727 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:32.874780 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '7713f0f4', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1', 'scsi-SQEMU_QEMU_HARDDISK_7713f0f4-7c56-4d74-9f60-9875e1b6d006-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:32.874807 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:32.874818 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:32.874900 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k', 'dm-uuid-CRYPT-LUKS2-65aea23d0c6f484aa24c521c476a1576-Gji0op-AuWa-eNxM-I7eU-CwIY-YUKk-ZDTL6k'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 06:59:32.874913 | orchestrator | skipping: [testbed-node-4]
2026-02-15 06:59:32.874925 | orchestrator |
2026-02-15 06:59:32.874936 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-15 06:59:32.874947 | orchestrator | Sunday 15 February 2026 06:59:28 +0000 (0:00:01.503) 1:06:06.738 *******
2026-02-15 06:59:32.874957 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:59:32.874967 | orchestrator |
2026-02-15 06:59:32.874977 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-15 06:59:32.874986 | orchestrator | Sunday 15 February 2026 06:59:30 +0000 (0:00:01.566) 1:06:08.304 *******
2026-02-15 06:59:32.875004 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:59:32.875013 | orchestrator |
2026-02-15 06:59:32.875023 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 06:59:32.875033 | orchestrator | Sunday 15 February 2026 06:59:31 +0000 (0:00:01.175) 1:06:09.480 *******
2026-02-15 06:59:32.875042 | orchestrator | ok: [testbed-node-4]
2026-02-15 06:59:32.875051 | orchestrator |
2026-02-15 06:59:32.875062 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 06:59:32.875082 | orchestrator | Sunday 15 February 2026 06:59:32 +0000 (0:00:01.489) 1:06:10.969 *******
2026-02-15 07:00:15.808483 | orchestrator | skipping: [testbed-node-4]
2026-02-15 07:00:15.808614 | orchestrator |
2026-02-15 07:00:15.808637 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 07:00:15.808653 | orchestrator | Sunday 15 February 2026 06:59:34 +0000 (0:00:01.171) 1:06:12.141 *******
2026-02-15 07:00:15.808668 | orchestrator | skipping: [testbed-node-4]
2026-02-15 07:00:15.808681 | orchestrator |
2026-02-15 07:00:15.808696 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 07:00:15.808710 | orchestrator | Sunday 15 February 2026 06:59:35 +0000 (0:00:01.806) 1:06:13.947 *******
2026-02-15 07:00:15.808723 | orchestrator | skipping: [testbed-node-4]
2026-02-15 07:00:15.808737 | orchestrator |
2026-02-15 07:00:15.808750 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-15 07:00:15.808763 | orchestrator | Sunday 15 February 2026 06:59:37 +0000 (0:00:01.159) 1:06:15.107 *******
2026-02-15 07:00:15.808776 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-02-15 07:00:15.808789 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-02-15 07:00:15.808801 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-02-15 07:00:15.808813 | orchestrator |
2026-02-15 07:00:15.808826 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-15 07:00:15.808840 | orchestrator | Sunday 15 February 2026 06:59:38 +0000 (0:00:01.712) 1:06:16.819 *******
2026-02-15 07:00:15.808853 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-02-15 07:00:15.808866 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-02-15 07:00:15.808951 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-02-15 07:00:15.808976 | orchestrator | skipping: [testbed-node-4]
2026-02-15 07:00:15.808989 | orchestrator |
2026-02-15 07:00:15.809003 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-15 07:00:15.809017 | orchestrator | Sunday 15 February 2026 06:59:39 +0000 (0:00:01.198) 1:06:18.018 *******
2026-02-15 07:00:15.809031 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-4
2026-02-15 07:00:15.809047 | orchestrator |
2026-02-15 07:00:15.809062 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 07:00:15.809078 | orchestrator | Sunday 15 February 2026 06:59:41 +0000 (0:00:01.118) 1:06:19.136 *******
2026-02-15 07:00:15.809092 | orchestrator | skipping: [testbed-node-4]
2026-02-15 07:00:15.809106 | orchestrator |
2026-02-15 07:00:15.809121 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 07:00:15.809135 | orchestrator | Sunday 15 February 2026 06:59:42 +0000 (0:00:01.115) 1:06:20.252 *******
2026-02-15 07:00:15.809149 | orchestrator | skipping: [testbed-node-4]
2026-02-15 07:00:15.809162 | orchestrator |
2026-02-15 07:00:15.809176 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 07:00:15.809191 | orchestrator | Sunday 15 February 2026 06:59:43 +0000 (0:00:01.177) 1:06:21.429 *******
2026-02-15 07:00:15.809205 | orchestrator | skipping: [testbed-node-4]
2026-02-15 07:00:15.809218 | orchestrator |
2026-02-15 07:00:15.809228 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 07:00:15.809237 | orchestrator | Sunday 15 February 2026 06:59:44 +0000 (0:00:01.156) 1:06:22.586 *******
2026-02-15 07:00:15.809247 | orchestrator | ok: [testbed-node-4]
2026-02-15 07:00:15.809280 | orchestrator |
2026-02-15 07:00:15.809289 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 07:00:15.809299 | orchestrator | Sunday 15 February 2026 06:59:45 +0000 (0:00:01.252) 1:06:23.839 *******
2026-02-15 07:00:15.809308 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-02-15 07:00:15.809317 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-02-15 07:00:15.809327 | orchestrator | skipping: [testbed-node-4]
=> (item=testbed-node-5)  2026-02-15 07:00:15.809350 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:15.809360 | orchestrator | 2026-02-15 07:00:15.809369 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 07:00:15.809377 | orchestrator | Sunday 15 February 2026 06:59:47 +0000 (0:00:01.401) 1:06:25.240 ******* 2026-02-15 07:00:15.809384 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 07:00:15.809392 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 07:00:15.809400 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 07:00:15.809407 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:15.809415 | orchestrator | 2026-02-15 07:00:15.809423 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 07:00:15.809431 | orchestrator | Sunday 15 February 2026 06:59:48 +0000 (0:00:01.835) 1:06:27.076 ******* 2026-02-15 07:00:15.809439 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 07:00:15.809447 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 07:00:15.809454 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 07:00:15.809462 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:15.809470 | orchestrator | 2026-02-15 07:00:15.809477 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 07:00:15.809485 | orchestrator | Sunday 15 February 2026 06:59:50 +0000 (0:00:01.819) 1:06:28.895 ******* 2026-02-15 07:00:15.809492 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:15.809500 | orchestrator | 2026-02-15 07:00:15.809508 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 07:00:15.809516 | orchestrator | Sunday 15 February 2026 06:59:51 +0000 
(0:00:01.206) 1:06:30.102 ******* 2026-02-15 07:00:15.809523 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-02-15 07:00:15.809531 | orchestrator | 2026-02-15 07:00:15.809539 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-02-15 07:00:15.809547 | orchestrator | Sunday 15 February 2026 06:59:53 +0000 (0:00:01.376) 1:06:31.479 ******* 2026-02-15 07:00:15.809572 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 07:00:15.809581 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 07:00:15.809589 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 07:00:15.809596 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2026-02-15 07:00:15.809604 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-15 07:00:15.809612 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 07:00:15.809620 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 07:00:15.809627 | orchestrator | 2026-02-15 07:00:15.809635 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-02-15 07:00:15.809642 | orchestrator | Sunday 15 February 2026 06:59:55 +0000 (0:00:01.867) 1:06:33.346 ******* 2026-02-15 07:00:15.809650 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 07:00:15.809658 | orchestrator | ok: [testbed-node-4 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 07:00:15.809665 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 07:00:15.809680 | orchestrator | ok: [testbed-node-4 -> testbed-node-3(192.168.16.13)] => 
(item=testbed-node-3) 2026-02-15 07:00:15.809688 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-4) 2026-02-15 07:00:15.809696 | orchestrator | ok: [testbed-node-4 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-02-15 07:00:15.809703 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-02-15 07:00:15.809711 | orchestrator | 2026-02-15 07:00:15.809719 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] **************************** 2026-02-15 07:00:15.809726 | orchestrator | Sunday 15 February 2026 06:59:57 +0000 (0:00:02.244) 1:06:35.590 ******* 2026-02-15 07:00:15.809734 | orchestrator | changed: [testbed-node-4] 2026-02-15 07:00:15.809742 | orchestrator | 2026-02-15 07:00:15.809749 | orchestrator | TASK [Stop ceph rgw (pt. 1)] *************************************************** 2026-02-15 07:00:15.809757 | orchestrator | Sunday 15 February 2026 06:59:59 +0000 (0:00:02.007) 1:06:37.598 ******* 2026-02-15 07:00:15.809765 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 07:00:15.809773 | orchestrator | 2026-02-15 07:00:15.809781 | orchestrator | TASK [Stop ceph rgw (pt. 
2)] *************************************************** 2026-02-15 07:00:15.809788 | orchestrator | Sunday 15 February 2026 07:00:02 +0000 (0:00:02.580) 1:06:40.178 ******* 2026-02-15 07:00:15.809796 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 07:00:15.809804 | orchestrator | 2026-02-15 07:00:15.809811 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-15 07:00:15.809819 | orchestrator | Sunday 15 February 2026 07:00:04 +0000 (0:00:01.932) 1:06:42.111 ******* 2026-02-15 07:00:15.809827 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-4 2026-02-15 07:00:15.809835 | orchestrator | 2026-02-15 07:00:15.809842 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-15 07:00:15.809850 | orchestrator | Sunday 15 February 2026 07:00:05 +0000 (0:00:01.183) 1:06:43.294 ******* 2026-02-15 07:00:15.809858 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-4 2026-02-15 07:00:15.809865 | orchestrator | 2026-02-15 07:00:15.809898 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-15 07:00:15.809907 | orchestrator | Sunday 15 February 2026 07:00:06 +0000 (0:00:01.132) 1:06:44.427 ******* 2026-02-15 07:00:15.809915 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:15.809923 | orchestrator | 2026-02-15 07:00:15.809931 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-15 07:00:15.809938 | orchestrator | Sunday 15 February 2026 07:00:07 +0000 (0:00:01.237) 1:06:45.664 ******* 2026-02-15 07:00:15.809946 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:15.809954 | orchestrator | 2026-02-15 07:00:15.809961 | orchestrator | TASK 
[ceph-handler : Check for a mds container] ******************************** 2026-02-15 07:00:15.809969 | orchestrator | Sunday 15 February 2026 07:00:09 +0000 (0:00:01.574) 1:06:47.238 ******* 2026-02-15 07:00:15.809977 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:15.809985 | orchestrator | 2026-02-15 07:00:15.809992 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-15 07:00:15.810000 | orchestrator | Sunday 15 February 2026 07:00:10 +0000 (0:00:01.530) 1:06:48.769 ******* 2026-02-15 07:00:15.810008 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:15.810068 | orchestrator | 2026-02-15 07:00:15.810077 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-15 07:00:15.810085 | orchestrator | Sunday 15 February 2026 07:00:12 +0000 (0:00:01.523) 1:06:50.292 ******* 2026-02-15 07:00:15.810093 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:15.810101 | orchestrator | 2026-02-15 07:00:15.810108 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-15 07:00:15.810116 | orchestrator | Sunday 15 February 2026 07:00:13 +0000 (0:00:01.175) 1:06:51.467 ******* 2026-02-15 07:00:15.810130 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:15.810138 | orchestrator | 2026-02-15 07:00:15.810146 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-15 07:00:15.810154 | orchestrator | Sunday 15 February 2026 07:00:14 +0000 (0:00:01.138) 1:06:52.606 ******* 2026-02-15 07:00:15.810162 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:15.810170 | orchestrator | 2026-02-15 07:00:15.810178 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-15 07:00:15.810192 | orchestrator | Sunday 15 February 2026 07:00:15 +0000 (0:00:01.289) 1:06:53.896 ******* 2026-02-15 07:00:56.171787 | 
orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:56.171906 | orchestrator | 2026-02-15 07:00:56.171968 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-15 07:00:56.171982 | orchestrator | Sunday 15 February 2026 07:00:17 +0000 (0:00:01.532) 1:06:55.429 ******* 2026-02-15 07:00:56.171992 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:56.172003 | orchestrator | 2026-02-15 07:00:56.172014 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-15 07:00:56.172025 | orchestrator | Sunday 15 February 2026 07:00:18 +0000 (0:00:01.501) 1:06:56.931 ******* 2026-02-15 07:00:56.172035 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172047 | orchestrator | 2026-02-15 07:00:56.172058 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-15 07:00:56.172068 | orchestrator | Sunday 15 February 2026 07:00:19 +0000 (0:00:00.779) 1:06:57.710 ******* 2026-02-15 07:00:56.172078 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172089 | orchestrator | 2026-02-15 07:00:56.172098 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-15 07:00:56.172109 | orchestrator | Sunday 15 February 2026 07:00:20 +0000 (0:00:00.789) 1:06:58.500 ******* 2026-02-15 07:00:56.172121 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:56.172132 | orchestrator | 2026-02-15 07:00:56.172142 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-15 07:00:56.172153 | orchestrator | Sunday 15 February 2026 07:00:21 +0000 (0:00:00.812) 1:06:59.312 ******* 2026-02-15 07:00:56.172164 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:56.172174 | orchestrator | 2026-02-15 07:00:56.172184 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-15 07:00:56.172195 
| orchestrator | Sunday 15 February 2026 07:00:21 +0000 (0:00:00.789) 1:07:00.102 ******* 2026-02-15 07:00:56.172205 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:56.172216 | orchestrator | 2026-02-15 07:00:56.172242 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-15 07:00:56.172263 | orchestrator | Sunday 15 February 2026 07:00:22 +0000 (0:00:00.847) 1:07:00.950 ******* 2026-02-15 07:00:56.172274 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172285 | orchestrator | 2026-02-15 07:00:56.172296 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-15 07:00:56.172307 | orchestrator | Sunday 15 February 2026 07:00:23 +0000 (0:00:00.960) 1:07:01.910 ******* 2026-02-15 07:00:56.172317 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172328 | orchestrator | 2026-02-15 07:00:56.172339 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-15 07:00:56.172349 | orchestrator | Sunday 15 February 2026 07:00:24 +0000 (0:00:00.800) 1:07:02.711 ******* 2026-02-15 07:00:56.172359 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172369 | orchestrator | 2026-02-15 07:00:56.172380 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-15 07:00:56.172391 | orchestrator | Sunday 15 February 2026 07:00:25 +0000 (0:00:00.815) 1:07:03.526 ******* 2026-02-15 07:00:56.172402 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:56.172411 | orchestrator | 2026-02-15 07:00:56.172421 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-15 07:00:56.172431 | orchestrator | Sunday 15 February 2026 07:00:26 +0000 (0:00:00.821) 1:07:04.347 ******* 2026-02-15 07:00:56.172472 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:56.172483 | orchestrator | 2026-02-15 07:00:56.172493 
| orchestrator | TASK [ceph-common : Include configure_repository.yml] ************************** 2026-02-15 07:00:56.172504 | orchestrator | Sunday 15 February 2026 07:00:27 +0000 (0:00:00.806) 1:07:05.154 ******* 2026-02-15 07:00:56.172515 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172526 | orchestrator | 2026-02-15 07:00:56.172536 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] ************** 2026-02-15 07:00:56.172563 | orchestrator | Sunday 15 February 2026 07:00:27 +0000 (0:00:00.815) 1:07:05.969 ******* 2026-02-15 07:00:56.172574 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172584 | orchestrator | 2026-02-15 07:00:56.172594 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] **************** 2026-02-15 07:00:56.172606 | orchestrator | Sunday 15 February 2026 07:00:28 +0000 (0:00:00.812) 1:07:06.782 ******* 2026-02-15 07:00:56.172615 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172626 | orchestrator | 2026-02-15 07:00:56.172636 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ******************** 2026-02-15 07:00:56.172647 | orchestrator | Sunday 15 February 2026 07:00:29 +0000 (0:00:00.824) 1:07:07.606 ******* 2026-02-15 07:00:56.172657 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172667 | orchestrator | 2026-02-15 07:00:56.172677 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] *************** 2026-02-15 07:00:56.172688 | orchestrator | Sunday 15 February 2026 07:00:30 +0000 (0:00:00.758) 1:07:08.365 ******* 2026-02-15 07:00:56.172698 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172707 | orchestrator | 2026-02-15 07:00:56.172717 | orchestrator | TASK [ceph-common : Get ceph version] ****************************************** 2026-02-15 07:00:56.172728 | orchestrator | Sunday 15 February 2026 07:00:31 +0000 (0:00:00.795) 1:07:09.161 ******* 
2026-02-15 07:00:56.172738 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172749 | orchestrator | 2026-02-15 07:00:56.172760 | orchestrator | TASK [ceph-common : Set_fact ceph_version] ************************************* 2026-02-15 07:00:56.172771 | orchestrator | Sunday 15 February 2026 07:00:31 +0000 (0:00:00.757) 1:07:09.919 ******* 2026-02-15 07:00:56.172781 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172792 | orchestrator | 2026-02-15 07:00:56.172802 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] *** 2026-02-15 07:00:56.172814 | orchestrator | Sunday 15 February 2026 07:00:32 +0000 (0:00:00.762) 1:07:10.681 ******* 2026-02-15 07:00:56.172824 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172834 | orchestrator | 2026-02-15 07:00:56.172845 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] ************************* 2026-02-15 07:00:56.172855 | orchestrator | Sunday 15 February 2026 07:00:33 +0000 (0:00:00.862) 1:07:11.544 ******* 2026-02-15 07:00:56.172865 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172876 | orchestrator | 2026-02-15 07:00:56.172908 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************ 2026-02-15 07:00:56.172944 | orchestrator | Sunday 15 February 2026 07:00:34 +0000 (0:00:00.771) 1:07:12.315 ******* 2026-02-15 07:00:56.172955 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.172964 | orchestrator | 2026-02-15 07:00:56.172974 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ******************** 2026-02-15 07:00:56.172984 | orchestrator | Sunday 15 February 2026 07:00:34 +0000 (0:00:00.780) 1:07:13.096 ******* 2026-02-15 07:00:56.172994 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.173004 | orchestrator | 2026-02-15 07:00:56.173014 | orchestrator | TASK [ceph-common : Include selinux.yml] 
*************************************** 2026-02-15 07:00:56.173024 | orchestrator | Sunday 15 February 2026 07:00:35 +0000 (0:00:00.852) 1:07:13.948 ******* 2026-02-15 07:00:56.173034 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.173043 | orchestrator | 2026-02-15 07:00:56.173053 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2026-02-15 07:00:56.173075 | orchestrator | Sunday 15 February 2026 07:00:36 +0000 (0:00:00.787) 1:07:14.736 ******* 2026-02-15 07:00:56.173084 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:56.173090 | orchestrator | 2026-02-15 07:00:56.173097 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2026-02-15 07:00:56.173103 | orchestrator | Sunday 15 February 2026 07:00:38 +0000 (0:00:01.601) 1:07:16.338 ******* 2026-02-15 07:00:56.173109 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:56.173115 | orchestrator | 2026-02-15 07:00:56.173121 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2026-02-15 07:00:56.173127 | orchestrator | Sunday 15 February 2026 07:00:40 +0000 (0:00:01.879) 1:07:18.218 ******* 2026-02-15 07:00:56.173133 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-4 2026-02-15 07:00:56.173141 | orchestrator | 2026-02-15 07:00:56.173147 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2026-02-15 07:00:56.173153 | orchestrator | Sunday 15 February 2026 07:00:41 +0000 (0:00:01.145) 1:07:19.364 ******* 2026-02-15 07:00:56.173159 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.173165 | orchestrator | 2026-02-15 07:00:56.173171 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2026-02-15 07:00:56.173178 | orchestrator | Sunday 15 February 2026 07:00:42 +0000 (0:00:01.129) 1:07:20.494 ******* 
2026-02-15 07:00:56.173184 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.173190 | orchestrator | 2026-02-15 07:00:56.173196 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2026-02-15 07:00:56.173201 | orchestrator | Sunday 15 February 2026 07:00:43 +0000 (0:00:01.154) 1:07:21.649 ******* 2026-02-15 07:00:56.173206 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2026-02-15 07:00:56.173212 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2026-02-15 07:00:56.173217 | orchestrator | 2026-02-15 07:00:56.173223 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2026-02-15 07:00:56.173228 | orchestrator | Sunday 15 February 2026 07:00:45 +0000 (0:00:01.778) 1:07:23.428 ******* 2026-02-15 07:00:56.173233 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:56.173239 | orchestrator | 2026-02-15 07:00:56.173244 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2026-02-15 07:00:56.173250 | orchestrator | Sunday 15 February 2026 07:00:46 +0000 (0:00:01.517) 1:07:24.946 ******* 2026-02-15 07:00:56.173255 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.173260 | orchestrator | 2026-02-15 07:00:56.173266 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2026-02-15 07:00:56.173271 | orchestrator | Sunday 15 February 2026 07:00:48 +0000 (0:00:01.231) 1:07:26.177 ******* 2026-02-15 07:00:56.173276 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.173282 | orchestrator | 2026-02-15 07:00:56.173293 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-02-15 07:00:56.173299 | orchestrator | Sunday 15 February 2026 07:00:48 +0000 (0:00:00.915) 1:07:27.093 ******* 2026-02-15 07:00:56.173304 | orchestrator | 
skipping: [testbed-node-4] 2026-02-15 07:00:56.173309 | orchestrator | 2026-02-15 07:00:56.173315 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-02-15 07:00:56.173320 | orchestrator | Sunday 15 February 2026 07:00:49 +0000 (0:00:00.789) 1:07:27.882 ******* 2026-02-15 07:00:56.173326 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-4 2026-02-15 07:00:56.173331 | orchestrator | 2026-02-15 07:00:56.173336 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-02-15 07:00:56.173342 | orchestrator | Sunday 15 February 2026 07:00:50 +0000 (0:00:01.176) 1:07:29.059 ******* 2026-02-15 07:00:56.173347 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:00:56.173352 | orchestrator | 2026-02-15 07:00:56.173358 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-02-15 07:00:56.173367 | orchestrator | Sunday 15 February 2026 07:00:52 +0000 (0:00:01.717) 1:07:30.776 ******* 2026-02-15 07:00:56.173373 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-02-15 07:00:56.173378 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-02-15 07:00:56.173383 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-02-15 07:00:56.173389 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.173394 | orchestrator | 2026-02-15 07:00:56.173400 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-02-15 07:00:56.173409 | orchestrator | Sunday 15 February 2026 07:00:53 +0000 (0:00:01.144) 1:07:31.921 ******* 2026-02-15 07:00:56.173418 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.173427 | orchestrator | 2026-02-15 07:00:56.173435 | orchestrator | TASK [ceph-container-common : Export local 
ceph dev image] ********************* 2026-02-15 07:00:56.173443 | orchestrator | Sunday 15 February 2026 07:00:54 +0000 (0:00:01.135) 1:07:33.056 ******* 2026-02-15 07:00:56.173452 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:00:56.173460 | orchestrator | 2026-02-15 07:00:56.173476 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-02-15 07:01:39.384177 | orchestrator | Sunday 15 February 2026 07:00:56 +0000 (0:00:01.204) 1:07:34.261 ******* 2026-02-15 07:01:39.384292 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.384310 | orchestrator | 2026-02-15 07:01:39.384323 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-02-15 07:01:39.384334 | orchestrator | Sunday 15 February 2026 07:00:57 +0000 (0:00:01.138) 1:07:35.400 ******* 2026-02-15 07:01:39.384349 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.384368 | orchestrator | 2026-02-15 07:01:39.384386 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-02-15 07:01:39.384406 | orchestrator | Sunday 15 February 2026 07:00:58 +0000 (0:00:01.173) 1:07:36.574 ******* 2026-02-15 07:01:39.384425 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.384444 | orchestrator | 2026-02-15 07:01:39.384462 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-02-15 07:01:39.384481 | orchestrator | Sunday 15 February 2026 07:00:59 +0000 (0:00:00.841) 1:07:37.415 ******* 2026-02-15 07:01:39.384500 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:01:39.384519 | orchestrator | 2026-02-15 07:01:39.384538 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-02-15 07:01:39.384559 | orchestrator | Sunday 15 February 2026 07:01:01 +0000 (0:00:02.114) 1:07:39.530 ******* 2026-02-15 07:01:39.384577 | orchestrator | ok: 
[testbed-node-4] 2026-02-15 07:01:39.384595 | orchestrator | 2026-02-15 07:01:39.384607 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-02-15 07:01:39.384618 | orchestrator | Sunday 15 February 2026 07:01:02 +0000 (0:00:00.842) 1:07:40.373 ******* 2026-02-15 07:01:39.384630 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-4 2026-02-15 07:01:39.384641 | orchestrator | 2026-02-15 07:01:39.384651 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-02-15 07:01:39.384662 | orchestrator | Sunday 15 February 2026 07:01:03 +0000 (0:00:01.124) 1:07:41.498 ******* 2026-02-15 07:01:39.384673 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.384684 | orchestrator | 2026-02-15 07:01:39.384695 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-02-15 07:01:39.384706 | orchestrator | Sunday 15 February 2026 07:01:04 +0000 (0:00:01.185) 1:07:42.684 ******* 2026-02-15 07:01:39.384720 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.384733 | orchestrator | 2026-02-15 07:01:39.384745 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-02-15 07:01:39.384757 | orchestrator | Sunday 15 February 2026 07:01:05 +0000 (0:00:01.222) 1:07:43.906 ******* 2026-02-15 07:01:39.384770 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.384782 | orchestrator | 2026-02-15 07:01:39.384796 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-02-15 07:01:39.384836 | orchestrator | Sunday 15 February 2026 07:01:06 +0000 (0:00:01.170) 1:07:45.077 ******* 2026-02-15 07:01:39.384849 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.384861 | orchestrator | 2026-02-15 07:01:39.384874 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release 
nautilus] ****************** 2026-02-15 07:01:39.384887 | orchestrator | Sunday 15 February 2026 07:01:08 +0000 (0:00:01.224) 1:07:46.301 ******* 2026-02-15 07:01:39.384900 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.384913 | orchestrator | 2026-02-15 07:01:39.384925 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-02-15 07:01:39.384939 | orchestrator | Sunday 15 February 2026 07:01:09 +0000 (0:00:01.135) 1:07:47.437 ******* 2026-02-15 07:01:39.384951 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.384996 | orchestrator | 2026-02-15 07:01:39.385009 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-02-15 07:01:39.385037 | orchestrator | Sunday 15 February 2026 07:01:10 +0000 (0:00:01.186) 1:07:48.623 ******* 2026-02-15 07:01:39.385050 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.385062 | orchestrator | 2026-02-15 07:01:39.385075 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-02-15 07:01:39.385086 | orchestrator | Sunday 15 February 2026 07:01:11 +0000 (0:00:01.196) 1:07:49.820 ******* 2026-02-15 07:01:39.385097 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.385108 | orchestrator | 2026-02-15 07:01:39.385119 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-02-15 07:01:39.385130 | orchestrator | Sunday 15 February 2026 07:01:12 +0000 (0:00:01.126) 1:07:50.946 ******* 2026-02-15 07:01:39.385140 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:01:39.385151 | orchestrator | 2026-02-15 07:01:39.385162 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-02-15 07:01:39.385173 | orchestrator | Sunday 15 February 2026 07:01:13 +0000 (0:00:00.814) 1:07:51.761 ******* 2026-02-15 07:01:39.385184 | orchestrator | included: 
/ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-4 2026-02-15 07:01:39.385196 | orchestrator | 2026-02-15 07:01:39.385207 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-02-15 07:01:39.385217 | orchestrator | Sunday 15 February 2026 07:01:14 +0000 (0:00:01.319) 1:07:53.080 ******* 2026-02-15 07:01:39.385228 | orchestrator | ok: [testbed-node-4] => (item=/etc/ceph) 2026-02-15 07:01:39.385239 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/) 2026-02-15 07:01:39.385250 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-02-15 07:01:39.385261 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-02-15 07:01:39.385272 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-02-15 07:01:39.385282 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-02-15 07:01:39.385293 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-02-15 07:01:39.385303 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-02-15 07:01:39.385314 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-02-15 07:01:39.385325 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-02-15 07:01:39.385336 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-02-15 07:01:39.385366 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-02-15 07:01:39.385378 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-02-15 07:01:39.385389 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-02-15 07:01:39.385400 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph) 2026-02-15 07:01:39.385411 | orchestrator | ok: [testbed-node-4] => (item=/var/log/ceph) 2026-02-15 07:01:39.385421 | orchestrator | 2026-02-15 07:01:39.385432 | orchestrator | 
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-15 07:01:39.385443 | orchestrator | Sunday 15 February 2026 07:01:21 +0000 (0:00:06.117) 1:07:59.198 ******* 2026-02-15 07:01:39.385460 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-4 2026-02-15 07:01:39.385471 | orchestrator | 2026-02-15 07:01:39.385482 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-15 07:01:39.385492 | orchestrator | Sunday 15 February 2026 07:01:22 +0000 (0:00:01.132) 1:08:00.330 ******* 2026-02-15 07:01:39.385503 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 07:01:39.385516 | orchestrator | 2026-02-15 07:01:39.385527 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-15 07:01:39.385538 | orchestrator | Sunday 15 February 2026 07:01:23 +0000 (0:00:01.565) 1:08:01.896 ******* 2026-02-15 07:01:39.385549 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 07:01:39.385560 | orchestrator | 2026-02-15 07:01:39.385570 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-15 07:01:39.385581 | orchestrator | Sunday 15 February 2026 07:01:25 +0000 (0:00:01.668) 1:08:03.564 ******* 2026-02-15 07:01:39.385592 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.385603 | orchestrator | 2026-02-15 07:01:39.385614 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-15 07:01:39.385625 | orchestrator | Sunday 15 February 2026 07:01:26 +0000 (0:00:00.775) 1:08:04.340 ******* 2026-02-15 07:01:39.385635 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.385646 | 
orchestrator | 2026-02-15 07:01:39.385657 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-15 07:01:39.385668 | orchestrator | Sunday 15 February 2026 07:01:27 +0000 (0:00:00.805) 1:08:05.146 ******* 2026-02-15 07:01:39.385679 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.385690 | orchestrator | 2026-02-15 07:01:39.385700 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-15 07:01:39.385711 | orchestrator | Sunday 15 February 2026 07:01:27 +0000 (0:00:00.785) 1:08:05.932 ******* 2026-02-15 07:01:39.385722 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.385733 | orchestrator | 2026-02-15 07:01:39.385744 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-15 07:01:39.385754 | orchestrator | Sunday 15 February 2026 07:01:28 +0000 (0:00:00.864) 1:08:06.797 ******* 2026-02-15 07:01:39.385765 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.385776 | orchestrator | 2026-02-15 07:01:39.385788 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-15 07:01:39.385806 | orchestrator | Sunday 15 February 2026 07:01:29 +0000 (0:00:00.796) 1:08:07.593 ******* 2026-02-15 07:01:39.385822 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.385839 | orchestrator | 2026-02-15 07:01:39.385856 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-15 07:01:39.385881 | orchestrator | Sunday 15 February 2026 07:01:30 +0000 (0:00:00.820) 1:08:08.413 ******* 2026-02-15 07:01:39.385899 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.385918 | orchestrator | 2026-02-15 07:01:39.385937 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-15 07:01:39.385990 | orchestrator | Sunday 15 February 2026 07:01:31 +0000 (0:00:00.884) 1:08:09.298 ******* 2026-02-15 07:01:39.386013 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.386119 | orchestrator | 2026-02-15 07:01:39.386140 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-15 07:01:39.386159 | orchestrator | Sunday 15 February 2026 07:01:31 +0000 (0:00:00.777) 1:08:10.076 ******* 2026-02-15 07:01:39.386177 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.386195 | orchestrator | 2026-02-15 07:01:39.386213 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-15 07:01:39.386245 | orchestrator | Sunday 15 February 2026 07:01:32 +0000 (0:00:00.765) 1:08:10.841 ******* 2026-02-15 07:01:39.386263 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.386281 | orchestrator | 2026-02-15 07:01:39.386299 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-15 07:01:39.386317 | orchestrator | Sunday 15 February 2026 07:01:33 +0000 (0:00:00.803) 1:08:11.644 ******* 2026-02-15 07:01:39.386335 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:01:39.386354 | orchestrator | 2026-02-15 07:01:39.386373 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-15 07:01:39.386392 | orchestrator | Sunday 15 February 2026 07:01:34 +0000 (0:00:00.795) 1:08:12.440 ******* 2026-02-15 07:01:39.386411 | orchestrator | changed: [testbed-node-4 -> testbed-node-2(192.168.16.12)] 2026-02-15 07:01:39.386429 | orchestrator | 2026-02-15 07:01:39.386449 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-15 07:01:39.386468 | orchestrator | Sunday 15 February 2026 07:01:38 +0000 (0:00:04.182) 1:08:16.623 ******* 2026-02-15 07:01:39.386486 | orchestrator | 
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 07:01:39.386505 | orchestrator | 2026-02-15 07:01:39.386541 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-15 07:02:20.613873 | orchestrator | Sunday 15 February 2026 07:01:39 +0000 (0:00:00.850) 1:08:17.473 ******* 2026-02-15 07:02:20.614096 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}]) 2026-02-15 07:02:20.614121 | orchestrator | ok: [testbed-node-4 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}]) 2026-02-15 07:02:20.614132 | orchestrator | 2026-02-15 07:02:20.614142 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-15 07:02:20.614152 | orchestrator | Sunday 15 February 2026 07:01:43 +0000 (0:00:04.443) 1:08:21.917 ******* 2026-02-15 07:02:20.614170 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:02:20.614180 | orchestrator | 2026-02-15 07:02:20.614189 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-15 07:02:20.614198 | orchestrator | Sunday 15 February 2026 07:01:44 +0000 (0:00:00.786) 1:08:22.703 ******* 2026-02-15 07:02:20.614206 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:02:20.614215 | orchestrator | 2026-02-15 07:02:20.614224 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-15 07:02:20.614234 | orchestrator | Sunday 15 February 2026 07:01:45 +0000 (0:00:00.818) 1:08:23.522 ******* 2026-02-15 07:02:20.614243 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:02:20.614253 | orchestrator | 2026-02-15 07:02:20.614268 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-15 07:02:20.614294 | orchestrator | Sunday 15 February 2026 07:01:46 +0000 (0:00:00.814) 1:08:24.337 ******* 2026-02-15 07:02:20.614308 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:02:20.614322 | orchestrator | 2026-02-15 07:02:20.614336 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-15 07:02:20.614349 | orchestrator | Sunday 15 February 2026 07:01:47 +0000 (0:00:00.825) 1:08:25.163 ******* 2026-02-15 07:02:20.614363 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:02:20.614378 | orchestrator | 2026-02-15 07:02:20.614392 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-15 07:02:20.614407 | orchestrator | Sunday 15 February 2026 07:01:47 +0000 (0:00:00.848) 1:08:26.011 ******* 2026-02-15 07:02:20.614449 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:02:20.614466 | orchestrator | 2026-02-15 07:02:20.614482 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-15 07:02:20.614498 | orchestrator | Sunday 15 February 2026 07:01:48 +0000 (0:00:00.981) 1:08:26.993 ******* 2026-02-15 07:02:20.614513 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 07:02:20.614530 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 07:02:20.614544 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 07:02:20.614558 | orchestrator | skipping: 
[testbed-node-4] 2026-02-15 07:02:20.614573 | orchestrator | 2026-02-15 07:02:20.614587 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 07:02:20.614602 | orchestrator | Sunday 15 February 2026 07:01:49 +0000 (0:00:01.067) 1:08:28.061 ******* 2026-02-15 07:02:20.614617 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 07:02:20.614633 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 07:02:20.614648 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 07:02:20.614663 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:02:20.614679 | orchestrator | 2026-02-15 07:02:20.614694 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 07:02:20.614709 | orchestrator | Sunday 15 February 2026 07:01:51 +0000 (0:00:01.083) 1:08:29.145 ******* 2026-02-15 07:02:20.614723 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2026-02-15 07:02:20.614738 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2026-02-15 07:02:20.614754 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2026-02-15 07:02:20.614768 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:02:20.614783 | orchestrator | 2026-02-15 07:02:20.614798 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 07:02:20.614814 | orchestrator | Sunday 15 February 2026 07:01:52 +0000 (0:00:01.118) 1:08:30.263 ******* 2026-02-15 07:02:20.614829 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:02:20.614843 | orchestrator | 2026-02-15 07:02:20.614858 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 07:02:20.614873 | orchestrator | Sunday 15 February 2026 07:01:53 +0000 (0:00:00.884) 1:08:31.147 ******* 2026-02-15 07:02:20.614888 | orchestrator | ok: 
[testbed-node-4] => (item=0) 2026-02-15 07:02:20.614903 | orchestrator | 2026-02-15 07:02:20.614917 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-15 07:02:20.614933 | orchestrator | Sunday 15 February 2026 07:01:54 +0000 (0:00:01.062) 1:08:32.209 ******* 2026-02-15 07:02:20.614948 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:02:20.614963 | orchestrator | 2026-02-15 07:02:20.614977 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-15 07:02:20.615014 | orchestrator | Sunday 15 February 2026 07:01:55 +0000 (0:00:01.481) 1:08:33.691 ******* 2026-02-15 07:02:20.615032 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-4 2026-02-15 07:02:20.615047 | orchestrator | 2026-02-15 07:02:20.615085 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-15 07:02:20.615101 | orchestrator | Sunday 15 February 2026 07:01:56 +0000 (0:00:01.130) 1:08:34.822 ******* 2026-02-15 07:02:20.615116 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 07:02:20.615130 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-15 07:02:20.615145 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 07:02:20.615159 | orchestrator | 2026-02-15 07:02:20.615173 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-15 07:02:20.615189 | orchestrator | Sunday 15 February 2026 07:01:59 +0000 (0:00:03.236) 1:08:38.059 ******* 2026-02-15 07:02:20.615204 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-15 07:02:20.615232 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-02-15 07:02:20.615248 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:02:20.615261 | orchestrator | 2026-02-15 07:02:20.615335 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-15 07:02:20.615354 | orchestrator | Sunday 15 February 2026 07:02:01 +0000 (0:00:01.988) 1:08:40.047 ******* 2026-02-15 07:02:20.615368 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:02:20.615382 | orchestrator | 2026-02-15 07:02:20.615398 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-15 07:02:20.615413 | orchestrator | Sunday 15 February 2026 07:02:02 +0000 (0:00:00.822) 1:08:40.870 ******* 2026-02-15 07:02:20.615427 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-4 2026-02-15 07:02:20.615444 | orchestrator | 2026-02-15 07:02:20.615458 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-15 07:02:20.615472 | orchestrator | Sunday 15 February 2026 07:02:04 +0000 (0:00:01.407) 1:08:42.278 ******* 2026-02-15 07:02:20.615488 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 07:02:20.615505 | orchestrator | 2026-02-15 07:02:20.615517 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-15 07:02:20.615526 | orchestrator | Sunday 15 February 2026 07:02:05 +0000 (0:00:01.655) 1:08:43.933 ******* 2026-02-15 07:02:20.615535 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 07:02:20.615544 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-15 07:02:20.615553 | orchestrator | 2026-02-15 07:02:20.615562 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-15 07:02:20.615570 | orchestrator | Sunday 15 February 2026 07:02:11 +0000 (0:00:05.216) 1:08:49.150 ******* 
2026-02-15 07:02:20.615579 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 07:02:20.615587 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 07:02:20.615595 | orchestrator | 2026-02-15 07:02:20.615604 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-15 07:02:20.615612 | orchestrator | Sunday 15 February 2026 07:02:14 +0000 (0:00:03.094) 1:08:52.245 ******* 2026-02-15 07:02:20.615621 | orchestrator | ok: [testbed-node-4] => (item=None) 2026-02-15 07:02:20.615630 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:02:20.615638 | orchestrator | 2026-02-15 07:02:20.615646 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-15 07:02:20.615655 | orchestrator | Sunday 15 February 2026 07:02:15 +0000 (0:00:01.628) 1:08:53.874 ******* 2026-02-15 07:02:20.615669 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-4 2026-02-15 07:02:20.615678 | orchestrator | 2026-02-15 07:02:20.615687 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-15 07:02:20.615695 | orchestrator | Sunday 15 February 2026 07:02:16 +0000 (0:00:01.187) 1:08:55.062 ******* 2026-02-15 07:02:20.615704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:02:20.615713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:02:20.615721 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:02:20.615730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-15 07:02:20.615739 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:02:20.615757 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:02:20.615766 | orchestrator | 2026-02-15 07:02:20.615774 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-15 07:02:20.615783 | orchestrator | Sunday 15 February 2026 07:02:18 +0000 (0:00:01.650) 1:08:56.713 ******* 2026-02-15 07:02:20.615791 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:02:20.615800 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:02:20.615809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:02:20.615828 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:03:27.262166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:03:27.262288 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:03:27.262306 | orchestrator | 2026-02-15 07:03:27.262320 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-15 07:03:27.262333 | orchestrator | Sunday 15 February 2026 07:02:20 +0000 (0:00:01.985) 1:08:58.698 ******* 2026-02-15 07:03:27.262344 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 07:03:27.262357 
| orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 07:03:27.262369 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 07:03:27.262380 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 07:03:27.262393 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 07:03:27.262404 | orchestrator | 2026-02-15 07:03:27.262415 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-15 07:03:27.262426 | orchestrator | Sunday 15 February 2026 07:02:51 +0000 (0:00:30.947) 1:09:29.645 ******* 2026-02-15 07:03:27.262437 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:03:27.262448 | orchestrator | 2026-02-15 07:03:27.262459 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-15 07:03:27.262470 | orchestrator | Sunday 15 February 2026 07:02:52 +0000 (0:00:00.808) 1:09:30.454 ******* 2026-02-15 07:03:27.262481 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:03:27.262492 | orchestrator | 2026-02-15 07:03:27.262503 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-15 07:03:27.262514 | orchestrator | Sunday 15 February 2026 07:02:53 +0000 (0:00:00.791) 1:09:31.245 ******* 2026-02-15 07:03:27.262524 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-4 2026-02-15 07:03:27.262536 | orchestrator | 2026-02-15 07:03:27.262547 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-15 07:03:27.262558 | orchestrator | Sunday 15 February 2026 07:02:54 +0000 (0:00:01.262) 1:09:32.508 ******* 2026-02-15 07:03:27.262568 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-4 2026-02-15 07:03:27.262579 | orchestrator | 2026-02-15 07:03:27.262590 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-15 07:03:27.262601 | orchestrator | Sunday 15 February 2026 07:02:55 +0000 (0:00:01.221) 1:09:33.729 ******* 2026-02-15 07:03:27.262613 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:03:27.262653 | orchestrator | 2026-02-15 07:03:27.262667 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-15 07:03:27.262680 | orchestrator | Sunday 15 February 2026 07:02:57 +0000 (0:00:02.022) 1:09:35.752 ******* 2026-02-15 07:03:27.262707 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:03:27.262721 | orchestrator | 2026-02-15 07:03:27.262733 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-15 07:03:27.262747 | orchestrator | Sunday 15 February 2026 07:02:59 +0000 (0:00:02.059) 1:09:37.812 ******* 2026-02-15 07:03:27.262761 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:03:27.262774 | orchestrator | 2026-02-15 07:03:27.262786 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-15 07:03:27.262799 | orchestrator | Sunday 15 February 2026 07:03:01 +0000 (0:00:02.203) 1:09:40.016 ******* 2026-02-15 07:03:27.262812 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-02-15 07:03:27.262826 | orchestrator | 2026-02-15 07:03:27.262837 | orchestrator | PLAY [Upgrade ceph rgws cluster] *********************************************** 2026-02-15 07:03:27.262847 | 
orchestrator | 2026-02-15 07:03:27.262858 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 07:03:27.262869 | orchestrator | Sunday 15 February 2026 07:03:05 +0000 (0:00:03.133) 1:09:43.149 ******* 2026-02-15 07:03:27.262880 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-5 2026-02-15 07:03:27.262890 | orchestrator | 2026-02-15 07:03:27.262901 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-02-15 07:03:27.262912 | orchestrator | Sunday 15 February 2026 07:03:06 +0000 (0:00:01.135) 1:09:44.285 ******* 2026-02-15 07:03:27.262922 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:03:27.262933 | orchestrator | 2026-02-15 07:03:27.262944 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-02-15 07:03:27.262955 | orchestrator | Sunday 15 February 2026 07:03:07 +0000 (0:00:01.519) 1:09:45.804 ******* 2026-02-15 07:03:27.262965 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:03:27.262976 | orchestrator | 2026-02-15 07:03:27.262987 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 07:03:27.262998 | orchestrator | Sunday 15 February 2026 07:03:08 +0000 (0:00:01.209) 1:09:47.014 ******* 2026-02-15 07:03:27.263009 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:03:27.263019 | orchestrator | 2026-02-15 07:03:27.263030 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 07:03:27.263041 | orchestrator | Sunday 15 February 2026 07:03:10 +0000 (0:00:01.452) 1:09:48.467 ******* 2026-02-15 07:03:27.263103 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:03:27.263116 | orchestrator | 2026-02-15 07:03:27.263145 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-02-15 07:03:27.263157 | orchestrator | Sunday 15 
February 2026 07:03:11 +0000 (0:00:01.183) 1:09:49.651 ******* 2026-02-15 07:03:27.263168 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:03:27.263178 | orchestrator | 2026-02-15 07:03:27.263189 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-02-15 07:03:27.263200 | orchestrator | Sunday 15 February 2026 07:03:12 +0000 (0:00:01.139) 1:09:50.790 ******* 2026-02-15 07:03:27.263211 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:03:27.263222 | orchestrator | 2026-02-15 07:03:27.263233 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-02-15 07:03:27.263245 | orchestrator | Sunday 15 February 2026 07:03:13 +0000 (0:00:01.170) 1:09:51.960 ******* 2026-02-15 07:03:27.263256 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:03:27.263267 | orchestrator | 2026-02-15 07:03:27.263278 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-02-15 07:03:27.263289 | orchestrator | Sunday 15 February 2026 07:03:15 +0000 (0:00:01.177) 1:09:53.137 ******* 2026-02-15 07:03:27.263300 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:03:27.263311 | orchestrator | 2026-02-15 07:03:27.263330 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-02-15 07:03:27.263341 | orchestrator | Sunday 15 February 2026 07:03:16 +0000 (0:00:01.164) 1:09:54.302 ******* 2026-02-15 07:03:27.263352 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 07:03:27.263363 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 07:03:27.263374 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 07:03:27.263385 | orchestrator | 2026-02-15 07:03:27.263396 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] 
******************************** 2026-02-15 07:03:27.263407 | orchestrator | Sunday 15 February 2026 07:03:17 +0000 (0:00:01.732) 1:09:56.034 ******* 2026-02-15 07:03:27.263418 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:03:27.263429 | orchestrator | 2026-02-15 07:03:27.263440 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-02-15 07:03:27.263451 | orchestrator | Sunday 15 February 2026 07:03:19 +0000 (0:00:01.292) 1:09:57.327 ******* 2026-02-15 07:03:27.263462 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-02-15 07:03:27.263473 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-02-15 07:03:27.263484 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-02-15 07:03:27.263495 | orchestrator | 2026-02-15 07:03:27.263506 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-02-15 07:03:27.263517 | orchestrator | Sunday 15 February 2026 07:03:22 +0000 (0:00:03.294) 1:10:00.621 ******* 2026-02-15 07:03:27.263528 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-02-15 07:03:27.263552 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-02-15 07:03:27.263564 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-02-15 07:03:27.263586 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:03:27.263598 | orchestrator | 2026-02-15 07:03:27.263609 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-02-15 07:03:27.263620 | orchestrator | Sunday 15 February 2026 07:03:24 +0000 (0:00:01.506) 1:10:02.128 ******* 2026-02-15 07:03:27.263638 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not 
containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-02-15 07:03:27.263653 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-02-15 07:03:27.263665 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-02-15 07:03:27.263677 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:03:27.263688 | orchestrator | 2026-02-15 07:03:27.263699 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-02-15 07:03:27.263710 | orchestrator | Sunday 15 February 2026 07:03:26 +0000 (0:00:02.042) 1:10:04.170 ******* 2026-02-15 07:03:27.263723 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:27.263745 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:46.340164 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:46.340264 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:03:46.340278 | orchestrator | 2026-02-15 07:03:46.340288 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-02-15 07:03:46.340298 | orchestrator | Sunday 15 February 2026 07:03:27 +0000 (0:00:01.181) 1:10:05.352 ******* 2026-02-15 07:03:46.340310 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'cf71ab2d386c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-02-15 07:03:19.782486', 'end': '2026-02-15 07:03:19.835769', 'delta': '0:00:00.053283', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['cf71ab2d386c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-02-15 07:03:46.340321 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': '6de6ee21b104', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-02-15 07:03:20.372183', 'end': '2026-02-15 07:03:20.419350', 'delta': '0:00:00.047167', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 
'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['6de6ee21b104'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-02-15 07:03:46.340345 | orchestrator | ok: [testbed-node-5] => (item={'changed': False, 'stdout': 'bf842a45b4ed', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-02-15 07:03:21.306229', 'end': '2026-02-15 07:03:21.350714', 'delta': '0:00:00.044485', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bf842a45b4ed'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-02-15 07:03:46.340355 | orchestrator | 2026-02-15 07:03:46.340364 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-02-15 07:03:46.340373 | orchestrator | Sunday 15 February 2026 07:03:28 +0000 (0:00:01.263) 1:10:06.616 ******* 2026-02-15 07:03:46.340382 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:03:46.340392 | orchestrator | 2026-02-15 07:03:46.340401 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-02-15 07:03:46.340410 | orchestrator | Sunday 15 February 2026 07:03:29 +0000 (0:00:01.334) 1:10:07.950 ******* 2026-02-15 07:03:46.340418 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:03:46.340427 | orchestrator | 2026-02-15 07:03:46.340456 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] 
*********************************
2026-02-15 07:03:46.340465 | orchestrator | Sunday 15 February 2026 07:03:31 +0000 (0:00:01.309) 1:10:09.261 *******
2026-02-15 07:03:46.340474 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:03:46.340482 | orchestrator |
2026-02-15 07:03:46.340491 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-02-15 07:03:46.340500 | orchestrator | Sunday 15 February 2026 07:03:32 +0000 (0:00:01.140) 1:10:10.401 *******
2026-02-15 07:03:46.340509 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-02-15 07:03:46.340517 | orchestrator |
2026-02-15 07:03:46.340526 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 07:03:46.340534 | orchestrator | Sunday 15 February 2026 07:03:34 +0000 (0:00:02.018) 1:10:12.420 *******
2026-02-15 07:03:46.340543 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:03:46.340552 | orchestrator |
2026-02-15 07:03:46.340560 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-02-15 07:03:46.340569 | orchestrator | Sunday 15 February 2026 07:03:35 +0000 (0:00:01.158) 1:10:13.579 *******
2026-02-15 07:03:46.340591 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:03:46.340600 | orchestrator |
2026-02-15 07:03:46.340609 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-02-15 07:03:46.340618 | orchestrator | Sunday 15 February 2026 07:03:36 +0000 (0:00:01.141) 1:10:14.721 *******
2026-02-15 07:03:46.340626 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:03:46.340635 | orchestrator |
2026-02-15 07:03:46.340644 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-02-15 07:03:46.340653 | orchestrator | Sunday 15 February 2026 07:03:37 +0000 (0:00:01.266) 1:10:15.987 *******
2026-02-15 07:03:46.340661 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:03:46.340671 | orchestrator |
2026-02-15 07:03:46.340682 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-02-15 07:03:46.340692 | orchestrator | Sunday 15 February 2026 07:03:39 +0000 (0:00:01.131) 1:10:17.118 *******
2026-02-15 07:03:46.340702 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:03:46.340712 | orchestrator |
2026-02-15 07:03:46.340723 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-02-15 07:03:46.340733 | orchestrator | Sunday 15 February 2026 07:03:40 +0000 (0:00:01.103) 1:10:18.222 *******
2026-02-15 07:03:46.340741 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:03:46.340750 | orchestrator |
2026-02-15 07:03:46.340758 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-02-15 07:03:46.340767 | orchestrator | Sunday 15 February 2026 07:03:41 +0000 (0:00:01.151) 1:10:19.373 *******
2026-02-15 07:03:46.340776 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:03:46.340784 | orchestrator |
2026-02-15 07:03:46.340793 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-02-15 07:03:46.340802 | orchestrator | Sunday 15 February 2026 07:03:42 +0000 (0:00:01.150) 1:10:20.524 *******
2026-02-15 07:03:46.340810 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:03:46.340819 | orchestrator |
2026-02-15 07:03:46.340827 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-02-15 07:03:46.340836 | orchestrator | Sunday 15 February 2026 07:03:43 +0000 (0:00:01.237) 1:10:21.761 *******
2026-02-15 07:03:46.340845 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:03:46.340853 | orchestrator |
2026-02-15 07:03:46.340862 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-02-15 07:03:46.340871
| orchestrator | Sunday 15 February 2026 07:03:44 +0000 (0:00:01.231) 1:10:22.993 ******* 2026-02-15 07:03:46.340880 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:03:46.340889 | orchestrator | 2026-02-15 07:03:46.340897 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-02-15 07:03:46.340906 | orchestrator | Sunday 15 February 2026 07:03:46 +0000 (0:00:01.224) 1:10:24.217 ******* 2026-02-15 07:03:46.340915 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 07:03:46.340936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'uuids': ['dcdf938a-1e00-4f8c-ba32-16bd01cbd7b7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z']}})  2026-02-15 07:03:46.340948 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU 
HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ca6afbc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}})  2026-02-15 07:03:46.340965 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9']}})  2026-02-15 07:03:47.492907 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 07:03:47.493025 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 07:03:47.493044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}})  2026-02-15 07:03:47.493114 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 
'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 07:03:47.493144 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP', 'dm-uuid-CRYPT-LUKS2-ddc473233b6d4a8581ea0c389df91130-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 07:03:47.493157 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 07:03:47.493170 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 'uuids': ['ddc47323-3b6d-4a85-81ea-0c389df91130'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP']}})  2026-02-15 07:03:47.493203 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978']}})  2026-02-15 07:03:47.493217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 07:03:47.493239 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3b30427', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': 
'227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': []}})  2026-02-15 07:03:47.493262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 07:03:47.493274 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}})  2026-02-15 07:03:47.493293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z', 'dm-uuid-CRYPT-LUKS2-dcdf938a1e004f8cba3216bd01cbd7b7-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}})  2026-02-15 07:03:47.715653 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:03:47.715751 | orchestrator | 2026-02-15 07:03:47.715766 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-02-15 07:03:47.715779 | orchestrator | Sunday 15 February 2026 07:03:47 +0000 (0:00:01.371) 1:10:25.588 ******* 2026-02-15 07:03:47.715793 | orchestrator | skipping: [testbed-node-5] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:47.715832 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978', 'dm-uuid-LVM-yn0X3YpOdmN7a2Vy51A3McBRTeRmlyi5spWxSZ24uYRMSOuc8ef4XbsQux3ozB1z'], 'uuids': ['dcdf938a-1e00-4f8c-ba32-16bd01cbd7b7'], 'labels': [], 'masters': ['dm-3']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z']}}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:47.715860 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f', 'scsi-SQEMU_QEMU_HARDDISK_1ca6afbc-10a2-4ec5-8c49-662ac545d94f'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 
'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '1ca6afbc', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:47.715873 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-0NSc3P-92oS-VJoi-pTqY-IHhw-jE6F-36M4cw', 'scsi-0QEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2', 'scsi-SQEMU_QEMU_HARDDISK_4783efc4-2c45-47ca-9463-c51e8fa27ad2'], 'uuids': [], 'labels': [], 'masters': ['dm-0']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'holders': ['ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9']}}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:47.715905 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:47.715918 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:47.715938 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'virtual': 1, 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'uuids': ['2026-02-15-02-28-37-00'], 'labels': ['config-2'], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU DVD-ROM', 'sas_address': None, 'sas_device_handle': None, 'removable': '1', 'support_discard': '0', 'partitions': {}, 
'rotational': '1', 'scheduler_mode': 'mq-deadline', 'sectors': '1012', 'sectorsize': '2048', 'size': '506.00 KB', 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:47.715955 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:47.715967 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-2', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP', 'dm-uuid-CRYPT-LUKS2-ddc473233b6d4a8581ea0c389df91130-1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:47.715979 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 
'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 07:03:47.715998 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-ceph--37190823--1b54--548e--8f85--c0a5c63b57f9-osd--block--37190823--1b54--548e--8f85--c0a5c63b57f9', 'dm-uuid-LVM-sA76iEv6wbKl5uvO5WIAJ33Mi7zP3Zom1g10zUGG5pmKwNOfX8zfnz1GpJLpaqwP'], 'uuids': ['ddc47323-3b6d-4a85-81ea-0c389df91130'], 'labels': [], 'masters': ['dm-2']}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'serial': '4783efc4', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41934848', 'sectorsize': '512', 'size': '20.00 GB', 'host': '', 'holders': ['1g10zU-GG5p-mKwN-OfX8-zfnz-1GpJ-LpaqwP']}}, 'ansible_loop_var': 'item'})  2026-02-15 07:04:01.446001 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'virtual': 1, 'links': {'ids': ['lvm-pv-uuid-rTocOK-8ZAt-aEx2-0Kiz-DsoA-cxgu-jbk1AV', 'scsi-0QEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58', 'scsi-SQEMU_QEMU_HARDDISK_3b876a0f-d488-4022-9acb-dce2cb7c3b58'], 'uuids': [], 'labels': [], 'masters': ['dm-1']}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': '3b876a0f', 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': 
'none', 'sectors': '41943040', 'sectorsize': '512', 'size': '20.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': ['ceph--fe68aa92--7c5f--5213--9184--27150181e978-osd--block--fe68aa92--7c5f--5213--9184--27150181e978']}}, 'ansible_loop_var': 'item'})  2026-02-15 07:04:01.446239 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})  2026-02-15 07:04:01.446277 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'virtual': 1, 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': 'QEMU', 'model': 'QEMU HARDDISK', 'sas_address': None, 'sas_device_handle': None, 'serial': 'e3b30427', 'removable': '0', 'support_discard': '4096', 'partitions': {'sda16': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part16'], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec'], 'labels': ['BOOT'], 'masters': []}, 'start': '227328', 'sectors': '1869825', 'sectorsize': 512, 'size': '913.00 MB', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec', 'holders': []}, 'sda14': {'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part14'], 'uuids': [], 'labels': [], 'masters': []}, 'start': '2048', 'sectors': '8192', 'sectorsize': 512, 'size': '4.00 MB', 'uuid': None, 'holders': []}, 'sda15': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part15'], 'uuids': ['5C78-612A'], 'labels': ['UEFI'], 'masters': []}, 'start': '10240', 'sectors': '217088', 'sectorsize': 512, 'size': '106.00 MB', 'uuid': '5C78-612A', 'holders': []}, 'sda1': {'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1', 'scsi-SQEMU_QEMU_HARDDISK_e3b30427-1d1a-4e05-b8dc-b7a9ac3a8dbd-part1'], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf'], 'labels': ['cloudimg-rootfs'], 'masters': []}, 'start': '2099200', 'sectors': '165672927', 'sectorsize': 512, 'size': '79.00 GB', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf', 'holders': []}}, 'rotational': '1', 'scheduler_mode': 'none', 'sectors': '167772160', 'sectorsize': '512', 'size': '80.00 GB', 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 07:04:01.446311 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 07:04:01.446336 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'virtual': 1, 'links': {'ids': [], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '0', 'partitions': {}, 'rotational': '0', 'scheduler_mode': 'none', 'sectors': '0', 'sectorsize': '512', 'size': '0.00 Bytes', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 07:04:01.446354 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-3', 'value': {'virtual': 1, 'links': {'ids': ['dm-name-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z', 'dm-uuid-CRYPT-LUKS2-dcdf938a1e004f8cba3216bd01cbd7b7-spWxSZ-24uY-RMSO-uc8e-f4Xb-sQux-3ozB1z'], 'uuids': [], 'labels': [], 'masters': []}, 'vendor': None, 'model': None, 'sas_address': None, 'sas_device_handle': None, 'removable': '0', 'support_discard': '4096', 'partitions': {}, 'rotational': '1', 'scheduler_mode': '', 'sectors': '41902080', 'sectorsize': '512', 'size': '19.98 GB', 'host': '', 'holders': []}}, 'ansible_loop_var': 'item'})
2026-02-15 07:04:01.446367 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:01.446381 | orchestrator |
2026-02-15 07:04:01.446393 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-02-15 07:04:01.446405 | orchestrator | Sunday 15 February 2026 07:03:48 +0000 (0:00:01.456) 1:10:27.044 *******
2026-02-15 07:04:01.446416 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:01.446427 | orchestrator |
2026-02-15 07:04:01.446438 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-02-15 07:04:01.446449 | orchestrator | Sunday 15 February 2026 07:03:50 +0000 (0:00:01.585) 1:10:28.630 *******
2026-02-15 07:04:01.446460 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:01.446470 | orchestrator |
2026-02-15 07:04:01.446481 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 07:04:01.446491 | orchestrator | Sunday 15 February 2026 07:03:51 +0000 (0:00:01.119) 1:10:29.750 *******
2026-02-15 07:04:01.446502 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:01.446513 | orchestrator |
2026-02-15 07:04:01.446523 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 07:04:01.446537 | orchestrator | Sunday 15 February 2026 07:03:53 +0000 (0:00:01.498) 1:10:31.248 *******
2026-02-15 07:04:01.446550 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:01.446563 | orchestrator |
2026-02-15 07:04:01.446575 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-02-15 07:04:01.446587 | orchestrator | Sunday 15 February 2026 07:03:54 +0000 (0:00:01.181) 1:10:32.430 *******
2026-02-15 07:04:01.446600 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:01.446613 | orchestrator |
2026-02-15 07:04:01.446626 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-02-15 07:04:01.446639 | orchestrator | Sunday 15 February 2026 07:03:55 +0000 (0:00:01.303) 1:10:33.733 *******
2026-02-15 07:04:01.446652 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:01.446665 | orchestrator |
2026-02-15 07:04:01.446678 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-02-15 07:04:01.446691 | orchestrator | Sunday 15 February 2026 07:03:56 +0000 (0:00:01.176) 1:10:34.910 *******
2026-02-15 07:04:01.446703 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-02-15 07:04:01.446717 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-02-15 07:04:01.446730 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-02-15 07:04:01.446749 | orchestrator |
2026-02-15 07:04:01.446762 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-02-15 07:04:01.446775 | orchestrator | Sunday 15 February 2026 07:03:58 +0000 (0:00:02.025) 1:10:36.936 *******
2026-02-15 07:04:01.446788 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-02-15 07:04:01.446800 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-02-15 07:04:01.446813 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-02-15 07:04:01.446826 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:01.446839 | orchestrator |
2026-02-15 07:04:01.446852 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-02-15 07:04:01.446865 | orchestrator | Sunday 15 February 2026 07:04:00 +0000 (0:00:01.216) 1:10:38.153 *******
2026-02-15 07:04:01.446878 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-5
2026-02-15 07:04:01.446891 | orchestrator |
2026-02-15 07:04:01.446909 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-02-15 07:04:43.760900 | orchestrator | Sunday 15 February 2026 07:04:01 +0000 (0:00:01.383) 1:10:39.536 *******
2026-02-15 07:04:43.761018 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.761036 | orchestrator |
2026-02-15 07:04:43.761049 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-02-15 07:04:43.761061 | orchestrator | Sunday 15 February 2026 07:04:02 +0000 (0:00:01.145) 1:10:40.682 *******
2026-02-15 07:04:43.761072 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.761083 | orchestrator |
2026-02-15 07:04:43.761094 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-02-15 07:04:43.761106 | orchestrator | Sunday 15 February 2026 07:04:03 +0000 (0:00:01.246) 1:10:41.928 *******
2026-02-15 07:04:43.761167 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.761179 | orchestrator |
2026-02-15 07:04:43.761190 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-02-15 07:04:43.761201 | orchestrator | Sunday 15 February 2026 07:04:04 +0000 (0:00:01.152) 1:10:43.081 *******
2026-02-15 07:04:43.761212 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:43.761224 | orchestrator |
2026-02-15 07:04:43.761235 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-02-15 07:04:43.761246 | orchestrator | Sunday 15 February 2026 07:04:06 +0000 (0:00:01.230) 1:10:44.311 *******
2026-02-15 07:04:43.761258 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-15 07:04:43.761269 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-15 07:04:43.761280 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 07:04:43.761291 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.761302 | orchestrator |
2026-02-15 07:04:43.761313 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-02-15 07:04:43.761324 | orchestrator | Sunday 15 February 2026 07:04:07 +0000 (0:00:01.488) 1:10:45.799 *******
2026-02-15 07:04:43.761335 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-15 07:04:43.761346 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-15 07:04:43.761358 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 07:04:43.761369 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.761379 | orchestrator |
2026-02-15 07:04:43.761395 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-02-15 07:04:43.761407 | orchestrator | Sunday 15 February 2026 07:04:09 +0000 (0:00:01.421) 1:10:47.221 *******
2026-02-15 07:04:43.761435 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-02-15 07:04:43.761448 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-02-15 07:04:43.761460 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 07:04:43.761472 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.761510 | orchestrator |
2026-02-15 07:04:43.761524 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-02-15 07:04:43.761536 | orchestrator | Sunday 15 February 2026 07:04:10 +0000 (0:00:01.465) 1:10:48.687 *******
2026-02-15 07:04:43.761549 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:43.761561 | orchestrator |
2026-02-15 07:04:43.761575 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-02-15 07:04:43.761587 | orchestrator | Sunday 15 February 2026 07:04:11 +0000 (0:00:01.183) 1:10:49.870 *******
2026-02-15 07:04:43.761599 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-02-15 07:04:43.761612 | orchestrator |
2026-02-15 07:04:43.761625 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-02-15 07:04:43.761637 | orchestrator | Sunday 15 February 2026 07:04:13 +0000 (0:00:01.334) 1:10:51.205 *******
2026-02-15 07:04:43.761650 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 07:04:43.761663 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 07:04:43.761675 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 07:04:43.761687 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 07:04:43.761698 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-15 07:04:43.761709 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 07:04:43.761720 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 07:04:43.761731 | orchestrator |
2026-02-15 07:04:43.761741 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-02-15 07:04:43.761752 | orchestrator | Sunday 15 February 2026 07:04:15 +0000 (0:00:02.225) 1:10:53.431 *******
2026-02-15 07:04:43.761763 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-02-15 07:04:43.761773 | orchestrator | ok: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-02-15 07:04:43.761784 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-02-15 07:04:43.761794 | orchestrator | ok: [testbed-node-5 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2026-02-15 07:04:43.761805 | orchestrator | ok: [testbed-node-5 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-02-15 07:04:43.761816 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-5)
2026-02-15 07:04:43.761827 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-02-15 07:04:43.761838 | orchestrator |
2026-02-15 07:04:43.761848 | orchestrator | TASK [Stop ceph rgw when upgrading from stable-3.2] ****************************
2026-02-15 07:04:43.761859 | orchestrator | Sunday 15 February 2026 07:04:17 +0000 (0:00:02.381) 1:10:55.812 *******
2026-02-15 07:04:43.761870 | orchestrator | changed: [testbed-node-5]
2026-02-15 07:04:43.761881 | orchestrator |
2026-02-15 07:04:43.761909 | orchestrator | TASK [Stop ceph rgw (pt. 1)] ***************************************************
2026-02-15 07:04:43.761920 | orchestrator | Sunday 15 February 2026 07:04:19 +0000 (0:00:02.013) 1:10:57.826 *******
2026-02-15 07:04:43.761932 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-15 07:04:43.761944 | orchestrator |
2026-02-15 07:04:43.761955 | orchestrator | TASK [Stop ceph rgw (pt. 2)] ***************************************************
2026-02-15 07:04:43.761966 | orchestrator | Sunday 15 February 2026 07:04:22 +0000 (0:00:02.631) 1:11:00.458 *******
2026-02-15 07:04:43.761976 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-02-15 07:04:43.761987 | orchestrator |
2026-02-15 07:04:43.761997 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-02-15 07:04:43.762008 | orchestrator | Sunday 15 February 2026 07:04:24 +0000 (0:00:01.933) 1:11:02.392 *******
2026-02-15 07:04:43.762094 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-5
2026-02-15 07:04:43.762106 | orchestrator |
2026-02-15 07:04:43.762142 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-02-15 07:04:43.762153 | orchestrator | Sunday 15 February 2026 07:04:25 +0000 (0:00:01.174) 1:11:03.567 *******
2026-02-15 07:04:43.762164 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-5
2026-02-15 07:04:43.762175 | orchestrator |
2026-02-15 07:04:43.762186 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-02-15 07:04:43.762196 | orchestrator | Sunday 15 February 2026 07:04:26 +0000 (0:00:01.148) 1:11:04.715 *******
2026-02-15 07:04:43.762207 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.762218 | orchestrator |
2026-02-15 07:04:43.762228 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-02-15 07:04:43.762239 | orchestrator | Sunday 15 February 2026 07:04:27 +0000 (0:00:01.114) 1:11:05.830 *******
2026-02-15 07:04:43.762249 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:43.762260 | orchestrator |
2026-02-15 07:04:43.762271 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-02-15 07:04:43.762281 | orchestrator | Sunday 15 February 2026 07:04:29 +0000 (0:00:01.529) 1:11:07.360 *******
2026-02-15 07:04:43.762292 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:43.762303 | orchestrator |
2026-02-15 07:04:43.762320 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-02-15 07:04:43.762331 | orchestrator | Sunday 15 February 2026 07:04:30 +0000 (0:00:01.559) 1:11:08.919 *******
2026-02-15 07:04:43.762341 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:43.762352 | orchestrator |
2026-02-15 07:04:43.762363 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-02-15 07:04:43.762374 | orchestrator | Sunday 15 February 2026 07:04:32 +0000 (0:00:01.496) 1:11:10.416 *******
2026-02-15 07:04:43.762384 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.762395 | orchestrator |
2026-02-15 07:04:43.762406 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-02-15 07:04:43.762416 | orchestrator | Sunday 15 February 2026 07:04:33 +0000 (0:00:01.123) 1:11:11.540 *******
2026-02-15 07:04:43.762427 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.762438 | orchestrator |
2026-02-15 07:04:43.762448 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-02-15 07:04:43.762459 | orchestrator | Sunday 15 February 2026 07:04:34 +0000 (0:00:01.175) 1:11:12.715 *******
2026-02-15 07:04:43.762470 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.762480 | orchestrator |
2026-02-15 07:04:43.762491 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-02-15 07:04:43.762502 | orchestrator | Sunday 15 February 2026 07:04:35 +0000 (0:00:01.192) 1:11:13.908 *******
2026-02-15 07:04:43.762512 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:43.762523 | orchestrator |
2026-02-15 07:04:43.762534 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-02-15 07:04:43.762544 | orchestrator | Sunday 15 February 2026 07:04:37 +0000 (0:00:01.536) 1:11:15.444 *******
2026-02-15 07:04:43.762555 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:43.762566 | orchestrator |
2026-02-15 07:04:43.762576 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-02-15 07:04:43.762587 | orchestrator | Sunday 15 February 2026 07:04:38 +0000 (0:00:01.621) 1:11:17.066 *******
2026-02-15 07:04:43.762598 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.762608 | orchestrator |
2026-02-15 07:04:43.762619 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-02-15 07:04:43.762630 | orchestrator | Sunday 15 February 2026 07:04:39 +0000 (0:00:00.759) 1:11:17.826 *******
2026-02-15 07:04:43.762640 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.762651 | orchestrator |
2026-02-15 07:04:43.762661 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-02-15 07:04:43.762679 | orchestrator | Sunday 15 February 2026 07:04:40 +0000 (0:00:00.794) 1:11:18.621 *******
2026-02-15 07:04:43.762690 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:43.762700 | orchestrator |
2026-02-15 07:04:43.762711 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-02-15 07:04:43.762722 | orchestrator | Sunday 15 February 2026 07:04:41 +0000 (0:00:00.801) 1:11:19.422 *******
2026-02-15 07:04:43.762732 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:43.762743 | orchestrator |
2026-02-15 07:04:43.762753 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-02-15 07:04:43.762764 | orchestrator | Sunday 15 February 2026 07:04:42 +0000 (0:00:00.808) 1:11:20.231 *******
2026-02-15 07:04:43.762775 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:04:43.762785 | orchestrator |
2026-02-15 07:04:43.762796 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-02-15 07:04:43.762806 | orchestrator | Sunday 15 February 2026 07:04:42 +0000 (0:00:00.835) 1:11:21.066 *******
2026-02-15 07:04:43.762817 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:04:43.762828 | orchestrator |
2026-02-15 07:04:43.762846 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-02-15 07:05:24.816556 | orchestrator | Sunday 15 February 2026 07:04:43 +0000 (0:00:00.783) 1:11:21.850 *******
2026-02-15 07:05:24.816634 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816641 | orchestrator |
2026-02-15 07:05:24.816646 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-02-15 07:05:24.816650 | orchestrator | Sunday 15 February 2026 07:04:44 +0000 (0:00:00.951) 1:11:22.801 *******
2026-02-15 07:05:24.816654 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816658 | orchestrator |
2026-02-15 07:05:24.816662 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-02-15 07:05:24.816666 | orchestrator | Sunday 15 February 2026 07:04:45 +0000 (0:00:00.792) 1:11:23.594 *******
2026-02-15 07:05:24.816670 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:05:24.816675 | orchestrator |
2026-02-15 07:05:24.816694 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-02-15 07:05:24.816698 | orchestrator | Sunday 15 February 2026 07:04:46 +0000 (0:00:00.817) 1:11:24.412 *******
2026-02-15 07:05:24.816701 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:05:24.816706 | orchestrator |
2026-02-15 07:05:24.816709 | orchestrator | TASK [ceph-common : Include configure_repository.yml] **************************
2026-02-15 07:05:24.816714 | orchestrator | Sunday 15 February 2026 07:04:47 +0000 (0:00:00.817) 1:11:25.230 *******
2026-02-15 07:05:24.816717 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816721 | orchestrator |
2026-02-15 07:05:24.816725 | orchestrator | TASK [ceph-common : Include installs/install_redhat_packages.yml] **************
2026-02-15 07:05:24.816729 | orchestrator | Sunday 15 February 2026 07:04:47 +0000 (0:00:00.844) 1:11:26.074 *******
2026-02-15 07:05:24.816733 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816737 | orchestrator |
2026-02-15 07:05:24.816740 | orchestrator | TASK [ceph-common : Include installs/install_suse_packages.yml] ****************
2026-02-15 07:05:24.816744 | orchestrator | Sunday 15 February 2026 07:04:48 +0000 (0:00:00.766) 1:11:26.841 *******
2026-02-15 07:05:24.816748 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816752 | orchestrator |
2026-02-15 07:05:24.816756 | orchestrator | TASK [ceph-common : Include installs/install_on_debian.yml] ********************
2026-02-15 07:05:24.816760 | orchestrator | Sunday 15 February 2026 07:04:49 +0000 (0:00:00.792) 1:11:27.633 *******
2026-02-15 07:05:24.816763 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816767 | orchestrator |
2026-02-15 07:05:24.816771 | orchestrator | TASK [ceph-common : Include_tasks installs/install_on_clear.yml] ***************
2026-02-15 07:05:24.816785 | orchestrator | Sunday 15 February 2026 07:04:50 +0000 (0:00:00.783) 1:11:28.417 *******
2026-02-15 07:05:24.816789 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816793 | orchestrator |
2026-02-15 07:05:24.816797 | orchestrator | TASK [ceph-common : Get ceph version] ******************************************
2026-02-15 07:05:24.816817 | orchestrator | Sunday 15 February 2026 07:04:51 +0000 (0:00:00.775) 1:11:29.193 *******
2026-02-15 07:05:24.816821 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816825 | orchestrator |
2026-02-15 07:05:24.816829 | orchestrator | TASK [ceph-common : Set_fact ceph_version] *************************************
2026-02-15 07:05:24.816833 | orchestrator | Sunday 15 February 2026 07:04:51 +0000 (0:00:00.864) 1:11:30.057 *******
2026-02-15 07:05:24.816836 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816840 | orchestrator |
2026-02-15 07:05:24.816844 | orchestrator | TASK [ceph-common : Set_fact ceph_release - override ceph_release with ceph_stable_release] ***
2026-02-15 07:05:24.816849 | orchestrator | Sunday 15 February 2026 07:04:52 +0000 (0:00:00.803) 1:11:30.861 *******
2026-02-15 07:05:24.816852 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816856 | orchestrator |
2026-02-15 07:05:24.816860 | orchestrator | TASK [ceph-common : Include create_rbd_client_dir.yml] *************************
2026-02-15 07:05:24.816864 | orchestrator | Sunday 15 February 2026 07:04:53 +0000 (0:00:00.799) 1:11:31.660 *******
2026-02-15 07:05:24.816867 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816871 | orchestrator |
2026-02-15 07:05:24.816875 | orchestrator | TASK [ceph-common : Include configure_cluster_name.yml] ************************
2026-02-15 07:05:24.816879 | orchestrator | Sunday 15 February 2026 07:04:54 +0000 (0:00:00.811) 1:11:32.472 *******
2026-02-15 07:05:24.816883 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816886 | orchestrator |
2026-02-15 07:05:24.816890 | orchestrator | TASK [ceph-common : Include configure_memory_allocator.yml] ********************
2026-02-15 07:05:24.816894 | orchestrator | Sunday 15 February 2026 07:04:55 +0000 (0:00:00.848) 1:11:33.320 *******
2026-02-15 07:05:24.816898 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816902 | orchestrator |
2026-02-15 07:05:24.816905 | orchestrator | TASK [ceph-common : Include selinux.yml] ***************************************
2026-02-15 07:05:24.816909 | orchestrator | Sunday 15 February 2026 07:04:55 +0000 (0:00:00.770) 1:11:34.091 *******
2026-02-15 07:05:24.816913 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816917 | orchestrator |
2026-02-15 07:05:24.816920 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-02-15 07:05:24.816924 | orchestrator | Sunday 15 February 2026 07:04:56 +0000 (0:00:00.781) 1:11:34.872 *******
2026-02-15 07:05:24.816928 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:05:24.816932 | orchestrator |
2026-02-15 07:05:24.816935 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-02-15 07:05:24.816939 | orchestrator | Sunday 15 February 2026 07:04:58 +0000 (0:00:01.601) 1:11:36.474 *******
2026-02-15 07:05:24.816943 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:05:24.816947 | orchestrator |
2026-02-15 07:05:24.816950 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-02-15 07:05:24.816954 | orchestrator | Sunday 15 February 2026 07:05:00 +0000 (0:00:01.869) 1:11:38.343 *******
2026-02-15 07:05:24.816958 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-5
2026-02-15 07:05:24.816963 | orchestrator |
2026-02-15 07:05:24.816967 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-02-15 07:05:24.816971 | orchestrator | Sunday 15 February 2026 07:05:01 +0000 (0:00:01.175) 1:11:39.519 *******
2026-02-15 07:05:24.816975 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.816979 | orchestrator |
2026-02-15 07:05:24.816982 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-02-15 07:05:24.816995 | orchestrator | Sunday 15 February 2026 07:05:02 +0000 (0:00:01.181) 1:11:40.701 *******
2026-02-15 07:05:24.816999 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.817003 | orchestrator |
2026-02-15 07:05:24.817007 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-02-15 07:05:24.817011 | orchestrator | Sunday 15 February 2026 07:05:03 +0000 (0:00:01.141) 1:11:41.843 *******
2026-02-15 07:05:24.817014 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-02-15 07:05:24.817023 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-02-15 07:05:24.817027 | orchestrator |
2026-02-15 07:05:24.817030 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-02-15 07:05:24.817034 | orchestrator | Sunday 15 February 2026 07:05:05 +0000 (0:00:01.959) 1:11:43.802 *******
2026-02-15 07:05:24.817038 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:05:24.817042 | orchestrator |
2026-02-15 07:05:24.817045 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-02-15 07:05:24.817049 | orchestrator | Sunday 15 February 2026 07:05:07 +0000 (0:00:01.447) 1:11:45.249 *******
2026-02-15 07:05:24.817053 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.817057 | orchestrator |
2026-02-15 07:05:24.817060 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-02-15 07:05:24.817064 | orchestrator | Sunday 15 February 2026 07:05:08 +0000 (0:00:01.215) 1:11:46.464 *******
2026-02-15 07:05:24.817068 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.817071 | orchestrator |
2026-02-15 07:05:24.817075 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-02-15 07:05:24.817079 | orchestrator | Sunday 15 February 2026 07:05:09 +0000 (0:00:00.805) 1:11:47.270 *******
2026-02-15 07:05:24.817083 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.817086 | orchestrator |
2026-02-15 07:05:24.817090 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-02-15 07:05:24.817094 | orchestrator | Sunday 15 February 2026 07:05:09 +0000 (0:00:00.780) 1:11:48.050 *******
2026-02-15 07:05:24.817097 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-5
2026-02-15 07:05:24.817101 | orchestrator |
2026-02-15 07:05:24.817105 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-02-15 07:05:24.817108 | orchestrator | Sunday 15 February 2026 07:05:11 +0000 (0:00:01.140) 1:11:49.191 *******
2026-02-15 07:05:24.817115 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:05:24.817119 | orchestrator |
2026-02-15 07:05:24.817122 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-02-15 07:05:24.817126 | orchestrator | Sunday 15 February 2026 07:05:12 +0000 (0:00:01.788) 1:11:50.979 *******
2026-02-15 07:05:24.817130 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-02-15 07:05:24.817134 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-02-15 07:05:24.817138 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-02-15 07:05:24.817162 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.817168 | orchestrator |
2026-02-15 07:05:24.817175 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-02-15 07:05:24.817182 | orchestrator | Sunday 15 February 2026 07:05:14 +0000 (0:00:01.167) 1:11:52.147 *******
2026-02-15 07:05:24.817188 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.817195 | orchestrator |
2026-02-15 07:05:24.817201 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-02-15 07:05:24.817208 | orchestrator | Sunday 15 February 2026 07:05:15 +0000 (0:00:01.146) 1:11:53.294 *******
2026-02-15 07:05:24.817212 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.817217 | orchestrator |
2026-02-15 07:05:24.817221 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-02-15 07:05:24.817225 | orchestrator | Sunday 15 February 2026 07:05:16 +0000 (0:00:01.232) 1:11:54.527 *******
2026-02-15 07:05:24.817229 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.817234 | orchestrator |
2026-02-15 07:05:24.817238 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-02-15 07:05:24.817242 | orchestrator | Sunday 15 February 2026 07:05:17 +0000 (0:00:01.135) 1:11:55.662 *******
2026-02-15 07:05:24.817247 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.817255 | orchestrator |
2026-02-15 07:05:24.817259 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-02-15 07:05:24.817263 | orchestrator | Sunday 15 February 2026 07:05:18 +0000 (0:00:01.155) 1:11:56.818 *******
2026-02-15 07:05:24.817268 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.817272 | orchestrator |
2026-02-15 07:05:24.817276 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-02-15 07:05:24.817281 | orchestrator | Sunday 15 February 2026 07:05:19 +0000 (0:00:00.838) 1:11:57.656 *******
2026-02-15 07:05:24.817285 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:05:24.817290 | orchestrator |
2026-02-15 07:05:24.817294 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-02-15 07:05:24.817298 | orchestrator | Sunday 15 February 2026 07:05:21 +0000 (0:00:02.062) 1:11:59.719 *******
2026-02-15 07:05:24.817303 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:05:24.817307 | orchestrator |
2026-02-15 07:05:24.817311 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-02-15 07:05:24.817315 | orchestrator | Sunday 15 February 2026 07:05:22 +0000 (0:00:00.829) 1:12:00.548 *******
2026-02-15 07:05:24.817319 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-5
2026-02-15 07:05:24.817324 | orchestrator |
2026-02-15 07:05:24.817328 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-02-15 07:05:24.817332 | orchestrator | Sunday 15 February 2026 07:05:23 +0000 (0:00:01.173) 1:12:01.721 *******
2026-02-15 07:05:24.817337 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:05:24.817341 | orchestrator |
2026-02-15 07:05:24.817345 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-02-15 07:05:24.817352 | orchestrator | Sunday 15 February 2026 07:05:24 +0000 (0:00:01.177) 1:12:02.899 *******
2026-02-15 07:06:06.913528 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:06:06.913644 | orchestrator |
2026-02-15 07:06:06.913661 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-02-15 07:06:06.913674 | orchestrator | Sunday 15 February 2026 07:05:25 +0000 (0:00:01.158) 1:12:04.057 *******
2026-02-15 07:06:06.913686 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:06:06.913697 | orchestrator |
2026-02-15 07:06:06.913709 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-02-15 07:06:06.913720 | orchestrator | Sunday 15 February 2026 07:05:27 +0000 (0:00:01.266) 1:12:05.324 *******
2026-02-15 07:06:06.913731 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:06:06.913742 | orchestrator |
2026-02-15 07:06:06.913752 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-02-15 07:06:06.913780 | orchestrator | Sunday 15 February 2026 07:05:28 +0000 (0:00:01.159) 1:12:06.483 *******
2026-02-15 07:06:06.913793 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:06:06.913804 | orchestrator |
2026-02-15 07:06:06.913827 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-02-15 07:06:06.913838 | orchestrator | Sunday 15 February 2026 07:05:29 +0000 (0:00:01.346) 1:12:07.830 *******
2026-02-15 07:06:06.913849 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:06:06.913860 | orchestrator |
2026-02-15 07:06:06.913871 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-02-15 07:06:06.913882 | orchestrator | Sunday 15 February 2026 07:05:30 +0000 (0:00:01.205) 1:12:09.036 *******
2026-02-15 07:06:06.913893 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:06:06.913904 | orchestrator |
2026-02-15 07:06:06.913914 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-02-15 07:06:06.913925 | orchestrator | Sunday 15 February 2026 07:05:32 +0000 (0:00:01.143) 1:12:10.180 *******
2026-02-15 07:06:06.913936 | orchestrator | skipping: [testbed-node-5]
2026-02-15 07:06:06.913947 | orchestrator |
2026-02-15 07:06:06.913958 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-02-15 07:06:06.913969 | orchestrator | Sunday 15 February 2026 07:05:33 +0000 (0:00:01.238) 1:12:11.418 *******
2026-02-15 07:06:06.914005 | orchestrator | ok: [testbed-node-5]
2026-02-15 07:06:06.914082 | orchestrator |
2026-02-15 07:06:06.914102 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-02-15 07:06:06.914139 | orchestrator | Sunday 15 February 2026 07:05:34 +0000 (0:00:00.841) 1:12:12.260 *******
2026-02-15 07:06:06.914159 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-5
2026-02-15 07:06:06.914210 | orchestrator |
2026-02-15 07:06:06.914230 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-02-15 07:06:06.914248 | orchestrator | Sunday 15 February 2026 07:05:35 +0000 (0:00:01.128) 1:12:13.389 *******
2026-02-15 07:06:06.914265 | orchestrator | ok: [testbed-node-5] => (item=/etc/ceph)
2026-02-15 07:06:06.914298 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/)
2026-02-15 07:06:06.914318 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-02-15 07:06:06.914337 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-02-15 07:06:06.914356 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-02-15 07:06:06.914377 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-02-15 07:06:06.914396 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-02-15 07:06:06.914414 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-02-15 07:06:06.914434 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-02-15 07:06:06.914447 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-02-15 07:06:06.914458 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-02-15 07:06:06.914469 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-02-15 07:06:06.914480 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-02-15 07:06:06.914491 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-02-15 07:06:06.914502 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2026-02-15 07:06:06.914513 | orchestrator | ok: [testbed-node-5] => (item=/var/log/ceph)
2026-02-15 07:06:06.914523 | orchestrator |
2026-02-15 07:06:06.914534 | orchestrator |
TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-02-15 07:06:06.914545 | orchestrator | Sunday 15 February 2026 07:05:41 +0000 (0:00:06.270) 1:12:19.660 ******* 2026-02-15 07:06:06.914556 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-5 2026-02-15 07:06:06.914566 | orchestrator | 2026-02-15 07:06:06.914577 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-02-15 07:06:06.914588 | orchestrator | Sunday 15 February 2026 07:05:42 +0000 (0:00:01.122) 1:12:20.783 ******* 2026-02-15 07:06:06.914599 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 07:06:06.914611 | orchestrator | 2026-02-15 07:06:06.914622 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-02-15 07:06:06.914633 | orchestrator | Sunday 15 February 2026 07:05:44 +0000 (0:00:01.502) 1:12:22.285 ******* 2026-02-15 07:06:06.914644 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 07:06:06.914655 | orchestrator | 2026-02-15 07:06:06.914666 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2026-02-15 07:06:06.914677 | orchestrator | Sunday 15 February 2026 07:05:46 +0000 (0:00:02.138) 1:12:24.424 ******* 2026-02-15 07:06:06.914687 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.914698 | orchestrator | 2026-02-15 07:06:06.914709 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-02-15 07:06:06.914741 | orchestrator | Sunday 15 February 2026 07:05:47 +0000 (0:00:00.769) 1:12:25.193 ******* 2026-02-15 07:06:06.914753 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.914764 | 
orchestrator | 2026-02-15 07:06:06.914775 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-02-15 07:06:06.914799 | orchestrator | Sunday 15 February 2026 07:05:47 +0000 (0:00:00.793) 1:12:25.986 ******* 2026-02-15 07:06:06.914809 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.914820 | orchestrator | 2026-02-15 07:06:06.914831 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-02-15 07:06:06.914842 | orchestrator | Sunday 15 February 2026 07:05:48 +0000 (0:00:00.786) 1:12:26.773 ******* 2026-02-15 07:06:06.914852 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.914863 | orchestrator | 2026-02-15 07:06:06.914874 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-02-15 07:06:06.914885 | orchestrator | Sunday 15 February 2026 07:05:49 +0000 (0:00:00.829) 1:12:27.602 ******* 2026-02-15 07:06:06.914896 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.914906 | orchestrator | 2026-02-15 07:06:06.914917 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-02-15 07:06:06.914928 | orchestrator | Sunday 15 February 2026 07:05:50 +0000 (0:00:00.812) 1:12:28.415 ******* 2026-02-15 07:06:06.914939 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.914949 | orchestrator | 2026-02-15 07:06:06.914960 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-02-15 07:06:06.914971 | orchestrator | Sunday 15 February 2026 07:05:51 +0000 (0:00:00.793) 1:12:29.208 ******* 2026-02-15 07:06:06.914981 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.914992 | orchestrator | 2026-02-15 07:06:06.915002 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] 
*** 2026-02-15 07:06:06.915013 | orchestrator | Sunday 15 February 2026 07:05:51 +0000 (0:00:00.807) 1:12:30.016 ******* 2026-02-15 07:06:06.915024 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.915035 | orchestrator | 2026-02-15 07:06:06.915046 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-02-15 07:06:06.915056 | orchestrator | Sunday 15 February 2026 07:05:52 +0000 (0:00:00.798) 1:12:30.814 ******* 2026-02-15 07:06:06.915067 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.915078 | orchestrator | 2026-02-15 07:06:06.915096 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-02-15 07:06:06.915107 | orchestrator | Sunday 15 February 2026 07:05:53 +0000 (0:00:00.780) 1:12:31.595 ******* 2026-02-15 07:06:06.915118 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.915128 | orchestrator | 2026-02-15 07:06:06.915139 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-02-15 07:06:06.915150 | orchestrator | Sunday 15 February 2026 07:05:54 +0000 (0:00:00.885) 1:12:32.481 ******* 2026-02-15 07:06:06.915161 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.915171 | orchestrator | 2026-02-15 07:06:06.915219 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-02-15 07:06:06.915230 | orchestrator | Sunday 15 February 2026 07:05:55 +0000 (0:00:00.822) 1:12:33.303 ******* 2026-02-15 07:06:06.915241 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] 2026-02-15 07:06:06.915252 | orchestrator | 2026-02-15 07:06:06.915263 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-02-15 07:06:06.915274 | orchestrator | Sunday 15 February 2026 07:05:59 +0000 (0:00:03.981) 1:12:37.284 ******* 2026-02-15 07:06:06.915284 | orchestrator | 
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 07:06:06.915295 | orchestrator | 2026-02-15 07:06:06.915306 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-02-15 07:06:06.915317 | orchestrator | Sunday 15 February 2026 07:06:00 +0000 (0:00:00.980) 1:12:38.265 ******* 2026-02-15 07:06:06.915330 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}]) 2026-02-15 07:06:06.915352 | orchestrator | ok: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}]) 2026-02-15 07:06:06.915364 | orchestrator | 2026-02-15 07:06:06.915375 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-02-15 07:06:06.915386 | orchestrator | Sunday 15 February 2026 07:06:04 +0000 (0:00:04.365) 1:12:42.630 ******* 2026-02-15 07:06:06.915396 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.915407 | orchestrator | 2026-02-15 07:06:06.915418 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-02-15 07:06:06.915428 | orchestrator | Sunday 15 February 2026 07:06:05 +0000 (0:00:00.808) 1:12:43.439 ******* 2026-02-15 07:06:06.915439 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.915450 | orchestrator | 2026-02-15 07:06:06.915461 | orchestrator | TASK [ceph-facts : Set current 
radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-02-15 07:06:06.915471 | orchestrator | Sunday 15 February 2026 07:06:06 +0000 (0:00:00.767) 1:12:44.206 ******* 2026-02-15 07:06:06.915482 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:06:06.915493 | orchestrator | 2026-02-15 07:06:06.915504 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-02-15 07:06:06.915522 | orchestrator | Sunday 15 February 2026 07:06:06 +0000 (0:00:00.796) 1:12:45.003 ******* 2026-02-15 07:07:12.980592 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:07:12.980694 | orchestrator | 2026-02-15 07:07:12.980707 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-02-15 07:07:12.980717 | orchestrator | Sunday 15 February 2026 07:06:07 +0000 (0:00:00.808) 1:12:45.811 ******* 2026-02-15 07:07:12.980725 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:07:12.980734 | orchestrator | 2026-02-15 07:07:12.980742 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-02-15 07:07:12.980751 | orchestrator | Sunday 15 February 2026 07:06:08 +0000 (0:00:00.850) 1:12:46.661 ******* 2026-02-15 07:07:12.980759 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:07:12.980768 | orchestrator | 2026-02-15 07:07:12.980776 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-02-15 07:07:12.980784 | orchestrator | Sunday 15 February 2026 07:06:09 +0000 (0:00:00.878) 1:12:47.540 ******* 2026-02-15 07:07:12.980793 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-15 07:07:12.980802 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-15 07:07:12.980810 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-15 07:07:12.980818 | orchestrator | skipping: 
[testbed-node-5] 2026-02-15 07:07:12.980826 | orchestrator | 2026-02-15 07:07:12.980834 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-02-15 07:07:12.980842 | orchestrator | Sunday 15 February 2026 07:06:10 +0000 (0:00:01.211) 1:12:48.752 ******* 2026-02-15 07:07:12.980850 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-15 07:07:12.980858 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-15 07:07:12.980867 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-15 07:07:12.980874 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:07:12.980883 | orchestrator | 2026-02-15 07:07:12.980891 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-02-15 07:07:12.980899 | orchestrator | Sunday 15 February 2026 07:06:11 +0000 (0:00:01.072) 1:12:49.824 ******* 2026-02-15 07:07:12.980907 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2026-02-15 07:07:12.980915 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2026-02-15 07:07:12.980956 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2026-02-15 07:07:12.980966 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:07:12.980974 | orchestrator | 2026-02-15 07:07:12.980982 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-02-15 07:07:12.980990 | orchestrator | Sunday 15 February 2026 07:06:12 +0000 (0:00:01.117) 1:12:50.942 ******* 2026-02-15 07:07:12.980997 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:07:12.981005 | orchestrator | 2026-02-15 07:07:12.981013 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-02-15 07:07:12.981021 | orchestrator | Sunday 15 February 2026 07:06:13 +0000 (0:00:00.826) 1:12:51.768 ******* 2026-02-15 07:07:12.981029 | orchestrator | ok: 
[testbed-node-5] => (item=0) 2026-02-15 07:07:12.981037 | orchestrator | 2026-02-15 07:07:12.981045 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-02-15 07:07:12.981053 | orchestrator | Sunday 15 February 2026 07:06:15 +0000 (0:00:01.565) 1:12:53.334 ******* 2026-02-15 07:07:12.981061 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:07:12.981068 | orchestrator | 2026-02-15 07:07:12.981076 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-02-15 07:07:12.981085 | orchestrator | Sunday 15 February 2026 07:06:16 +0000 (0:00:01.432) 1:12:54.767 ******* 2026-02-15 07:07:12.981093 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-5 2026-02-15 07:07:12.981101 | orchestrator | 2026-02-15 07:07:12.981109 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-15 07:07:12.981117 | orchestrator | Sunday 15 February 2026 07:06:17 +0000 (0:00:01.101) 1:12:55.869 ******* 2026-02-15 07:07:12.981125 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 07:07:12.981133 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-15 07:07:12.981143 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 07:07:12.981153 | orchestrator | 2026-02-15 07:07:12.981162 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-15 07:07:12.981172 | orchestrator | Sunday 15 February 2026 07:06:20 +0000 (0:00:03.201) 1:12:59.070 ******* 2026-02-15 07:07:12.981181 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-15 07:07:12.981191 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-02-15 07:07:12.981201 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:07:12.981210 | orchestrator | 2026-02-15 07:07:12.981236 | orchestrator | TASK [ceph-rgw : Copy 
SSL certificate & key data to certificate path] ********** 2026-02-15 07:07:12.981245 | orchestrator | Sunday 15 February 2026 07:06:22 +0000 (0:00:01.960) 1:13:01.031 ******* 2026-02-15 07:07:12.981255 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:07:12.981264 | orchestrator | 2026-02-15 07:07:12.981274 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-02-15 07:07:12.981283 | orchestrator | Sunday 15 February 2026 07:06:23 +0000 (0:00:00.801) 1:13:01.832 ******* 2026-02-15 07:07:12.981293 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-5 2026-02-15 07:07:12.981304 | orchestrator | 2026-02-15 07:07:12.981313 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-02-15 07:07:12.981322 | orchestrator | Sunday 15 February 2026 07:06:24 +0000 (0:00:01.146) 1:13:02.979 ******* 2026-02-15 07:07:12.981332 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 07:07:12.981343 | orchestrator | 2026-02-15 07:07:12.981352 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-02-15 07:07:12.981361 | orchestrator | Sunday 15 February 2026 07:06:26 +0000 (0:00:01.592) 1:13:04.571 ******* 2026-02-15 07:07:12.981383 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 07:07:12.981394 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-02-15 07:07:12.981410 | orchestrator | 2026-02-15 07:07:12.981420 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-02-15 07:07:12.981429 | orchestrator | Sunday 15 February 2026 07:06:31 +0000 (0:00:05.117) 1:13:09.689 ******* 
2026-02-15 07:07:12.981438 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-02-15 07:07:12.981447 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-02-15 07:07:12.981456 | orchestrator | 2026-02-15 07:07:12.981465 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-02-15 07:07:12.981475 | orchestrator | Sunday 15 February 2026 07:06:34 +0000 (0:00:03.108) 1:13:12.798 ******* 2026-02-15 07:07:12.981484 | orchestrator | ok: [testbed-node-5] => (item=None) 2026-02-15 07:07:12.981494 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:07:12.981502 | orchestrator | 2026-02-15 07:07:12.981510 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-02-15 07:07:12.981518 | orchestrator | Sunday 15 February 2026 07:06:36 +0000 (0:00:01.739) 1:13:14.537 ******* 2026-02-15 07:07:12.981526 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-5 2026-02-15 07:07:12.981534 | orchestrator | 2026-02-15 07:07:12.981542 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-02-15 07:07:12.981550 | orchestrator | Sunday 15 February 2026 07:06:37 +0000 (0:00:01.161) 1:13:15.699 ******* 2026-02-15 07:07:12.981558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:07:12.981566 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:07:12.981579 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:07:12.981587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 
'size': 3, 'type': 'replicated'}})  2026-02-15 07:07:12.981595 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:07:12.981603 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:07:12.981611 | orchestrator | 2026-02-15 07:07:12.981619 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-02-15 07:07:12.981627 | orchestrator | Sunday 15 February 2026 07:06:39 +0000 (0:00:01.633) 1:13:17.332 ******* 2026-02-15 07:07:12.981635 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:07:12.981643 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:07:12.981651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:07:12.981659 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:07:12.981667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-02-15 07:07:12.981675 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:07:12.981682 | orchestrator | 2026-02-15 07:07:12.981690 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-02-15 07:07:12.981698 | orchestrator | Sunday 15 February 2026 07:06:40 +0000 (0:00:01.656) 1:13:18.989 ******* 2026-02-15 07:07:12.981706 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 07:07:12.981714 
| orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 07:07:12.981728 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 07:07:12.981736 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 07:07:12.981745 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-02-15 07:07:12.981753 | orchestrator | 2026-02-15 07:07:12.981761 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-02-15 07:07:12.981769 | orchestrator | Sunday 15 February 2026 07:07:12 +0000 (0:00:31.302) 1:13:50.292 ******* 2026-02-15 07:07:12.981777 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:07:12.981784 | orchestrator | 2026-02-15 07:07:12.981792 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-02-15 07:07:12.981805 | orchestrator | Sunday 15 February 2026 07:07:12 +0000 (0:00:00.775) 1:13:51.067 ******* 2026-02-15 07:08:07.085803 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:08:07.085919 | orchestrator | 2026-02-15 07:08:07.085935 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-02-15 07:08:07.085949 | orchestrator | Sunday 15 February 2026 07:07:13 +0000 (0:00:00.793) 1:13:51.860 ******* 2026-02-15 07:08:07.085960 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-5 2026-02-15 07:08:07.085972 | orchestrator | 2026-02-15 07:08:07.085984 | orchestrator | TASK [ceph-rgw : Include_task 
systemd.yml] ************************************* 2026-02-15 07:08:07.085996 | orchestrator | Sunday 15 February 2026 07:07:14 +0000 (0:00:01.129) 1:13:52.989 ******* 2026-02-15 07:08:07.086007 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-5 2026-02-15 07:08:07.086069 | orchestrator | 2026-02-15 07:08:07.086083 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-02-15 07:08:07.086094 | orchestrator | Sunday 15 February 2026 07:07:16 +0000 (0:00:01.131) 1:13:54.121 ******* 2026-02-15 07:08:07.086106 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:08:07.086119 | orchestrator | 2026-02-15 07:08:07.086131 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-02-15 07:08:07.086143 | orchestrator | Sunday 15 February 2026 07:07:18 +0000 (0:00:02.057) 1:13:56.179 ******* 2026-02-15 07:08:07.086155 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:08:07.086166 | orchestrator | 2026-02-15 07:08:07.086178 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-02-15 07:08:07.086190 | orchestrator | Sunday 15 February 2026 07:07:20 +0000 (0:00:01.997) 1:13:58.176 ******* 2026-02-15 07:08:07.086202 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:08:07.086214 | orchestrator | 2026-02-15 07:08:07.086226 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-02-15 07:08:07.086238 | orchestrator | Sunday 15 February 2026 07:07:22 +0000 (0:00:02.258) 1:14:00.435 ******* 2026-02-15 07:08:07.086251 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-02-15 07:08:07.086293 | orchestrator | 2026-02-15 07:08:07.086320 | orchestrator | PLAY [Upgrade ceph rbd mirror node] ******************************************** 2026-02-15 07:08:07.086332 | 
orchestrator | skipping: no hosts matched 2026-02-15 07:08:07.086346 | orchestrator | 2026-02-15 07:08:07.086359 | orchestrator | PLAY [Upgrade ceph nfs node] *************************************************** 2026-02-15 07:08:07.086373 | orchestrator | skipping: no hosts matched 2026-02-15 07:08:07.086387 | orchestrator | 2026-02-15 07:08:07.086400 | orchestrator | PLAY [Upgrade ceph client node] ************************************************ 2026-02-15 07:08:07.086413 | orchestrator | skipping: no hosts matched 2026-02-15 07:08:07.086426 | orchestrator | 2026-02-15 07:08:07.086439 | orchestrator | PLAY [Upgrade ceph-crash daemons] ********************************************** 2026-02-15 07:08:07.086475 | orchestrator | 2026-02-15 07:08:07.086489 | orchestrator | TASK [Stop the ceph-crash service] ********************************************* 2026-02-15 07:08:07.086502 | orchestrator | Sunday 15 February 2026 07:07:26 +0000 (0:00:04.124) 1:14:04.560 ******* 2026-02-15 07:08:07.086515 | orchestrator | changed: [testbed-node-0] 2026-02-15 07:08:07.086529 | orchestrator | changed: [testbed-node-1] 2026-02-15 07:08:07.086542 | orchestrator | changed: [testbed-node-2] 2026-02-15 07:08:07.086555 | orchestrator | changed: [testbed-node-3] 2026-02-15 07:08:07.086567 | orchestrator | changed: [testbed-node-4] 2026-02-15 07:08:07.086580 | orchestrator | changed: [testbed-node-5] 2026-02-15 07:08:07.086593 | orchestrator | 2026-02-15 07:08:07.086606 | orchestrator | TASK [Mask and disable the ceph-crash service] ********************************* 2026-02-15 07:08:07.086619 | orchestrator | Sunday 15 February 2026 07:07:28 +0000 (0:00:02.493) 1:14:07.053 ******* 2026-02-15 07:08:07.086632 | orchestrator | changed: [testbed-node-0] 2026-02-15 07:08:07.086645 | orchestrator | changed: [testbed-node-3] 2026-02-15 07:08:07.086657 | orchestrator | changed: [testbed-node-1] 2026-02-15 07:08:07.086671 | orchestrator | changed: [testbed-node-2] 2026-02-15 07:08:07.086684 | 
orchestrator | changed: [testbed-node-4] 2026-02-15 07:08:07.086697 | orchestrator | changed: [testbed-node-5] 2026-02-15 07:08:07.086708 | orchestrator | 2026-02-15 07:08:07.086719 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 07:08:07.086730 | orchestrator | Sunday 15 February 2026 07:07:32 +0000 (0:00:03.520) 1:14:10.574 ******* 2026-02-15 07:08:07.086741 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:08:07.086752 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:08:07.086763 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:08:07.086774 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:08:07.086785 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:08:07.086795 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:08:07.086806 | orchestrator | 2026-02-15 07:08:07.086817 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 07:08:07.086827 | orchestrator | Sunday 15 February 2026 07:07:35 +0000 (0:00:02.613) 1:14:13.188 ******* 2026-02-15 07:08:07.086838 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:08:07.086849 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:08:07.086859 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:08:07.086870 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:08:07.086880 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:08:07.086891 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:08:07.086901 | orchestrator | 2026-02-15 07:08:07.086912 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-02-15 07:08:07.086923 | orchestrator | Sunday 15 February 2026 07:07:37 +0000 (0:00:02.163) 1:14:15.351 ******* 2026-02-15 07:08:07.086935 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 07:08:07.086948 | 
orchestrator | 2026-02-15 07:08:07.086958 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-02-15 07:08:07.086969 | orchestrator | Sunday 15 February 2026 07:07:39 +0000 (0:00:02.096) 1:14:17.448 ******* 2026-02-15 07:08:07.086980 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 07:08:07.086991 | orchestrator | 2026-02-15 07:08:07.087019 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-02-15 07:08:07.087031 | orchestrator | Sunday 15 February 2026 07:07:41 +0000 (0:00:02.314) 1:14:19.762 ******* 2026-02-15 07:08:07.087042 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:08:07.087052 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:08:07.087063 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:08:07.087074 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:08:07.087085 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:08:07.087095 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:08:07.087114 | orchestrator | 2026-02-15 07:08:07.087126 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-02-15 07:08:07.087137 | orchestrator | Sunday 15 February 2026 07:07:44 +0000 (0:00:02.627) 1:14:22.390 ******* 2026-02-15 07:08:07.087148 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:08:07.087159 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:08:07.087169 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:08:07.087180 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:08:07.087191 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:08:07.087202 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:08:07.087213 | orchestrator | 2026-02-15 07:08:07.087224 | orchestrator | TASK [ceph-handler : Check for a mds container] 
******************************** 2026-02-15 07:08:07.087234 | orchestrator | Sunday 15 February 2026 07:07:46 +0000 (0:00:02.328) 1:14:24.718 ******* 2026-02-15 07:08:07.087245 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:08:07.087297 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:08:07.087310 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:08:07.087321 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:08:07.087333 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:08:07.087343 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:08:07.087354 | orchestrator | 2026-02-15 07:08:07.087365 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-02-15 07:08:07.087375 | orchestrator | Sunday 15 February 2026 07:07:49 +0000 (0:00:02.528) 1:14:27.247 ******* 2026-02-15 07:08:07.087386 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:08:07.087397 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:08:07.087407 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:08:07.087418 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:08:07.087429 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:08:07.087439 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:08:07.087450 | orchestrator | 2026-02-15 07:08:07.087466 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-02-15 07:08:07.087477 | orchestrator | Sunday 15 February 2026 07:07:51 +0000 (0:00:02.174) 1:14:29.421 ******* 2026-02-15 07:08:07.087488 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:08:07.087499 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:08:07.087509 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:08:07.087520 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:08:07.087531 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:08:07.087541 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:08:07.087552 | orchestrator | 
2026-02-15 07:08:07.087563 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-02-15 07:08:07.087573 | orchestrator | Sunday 15 February 2026 07:07:53 +0000 (0:00:02.445) 1:14:31.867 ******* 2026-02-15 07:08:07.087584 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:08:07.087595 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:08:07.087606 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:08:07.087616 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:08:07.087627 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:08:07.087637 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:08:07.087648 | orchestrator | 2026-02-15 07:08:07.087659 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-02-15 07:08:07.087669 | orchestrator | Sunday 15 February 2026 07:07:55 +0000 (0:00:01.900) 1:14:33.767 ******* 2026-02-15 07:08:07.087680 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:08:07.087691 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:08:07.087701 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:08:07.087712 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:08:07.087722 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:08:07.087733 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:08:07.087743 | orchestrator | 2026-02-15 07:08:07.087754 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-02-15 07:08:07.087765 | orchestrator | Sunday 15 February 2026 07:07:57 +0000 (0:00:01.885) 1:14:35.653 ******* 2026-02-15 07:08:07.087784 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:08:07.087795 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:08:07.087805 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:08:07.087816 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:08:07.087827 | orchestrator | ok: [testbed-node-4] 
2026-02-15 07:08:07.087837 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:08:07.087848 | orchestrator | 2026-02-15 07:08:07.087858 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-02-15 07:08:07.087869 | orchestrator | Sunday 15 February 2026 07:08:00 +0000 (0:00:02.752) 1:14:38.405 ******* 2026-02-15 07:08:07.087880 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:08:07.087891 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:08:07.087901 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:08:07.087912 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:08:07.087922 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:08:07.087933 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:08:07.087943 | orchestrator | 2026-02-15 07:08:07.087954 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-02-15 07:08:07.087965 | orchestrator | Sunday 15 February 2026 07:08:02 +0000 (0:00:02.438) 1:14:40.844 ******* 2026-02-15 07:08:07.087976 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:08:07.087987 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:08:07.087997 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:08:07.088008 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:08:07.088019 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:08:07.088029 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:08:07.088040 | orchestrator | 2026-02-15 07:08:07.088051 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-02-15 07:08:07.088062 | orchestrator | Sunday 15 February 2026 07:08:05 +0000 (0:00:02.396) 1:14:43.240 ******* 2026-02-15 07:08:07.088072 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:08:07.088083 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:08:07.088094 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:08:07.088105 | orchestrator | skipping: 
[testbed-node-3] 2026-02-15 07:08:07.088116 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:08:07.088127 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:08:07.088137 | orchestrator | 2026-02-15 07:08:07.088155 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-02-15 07:09:03.806462 | orchestrator | Sunday 15 February 2026 07:08:07 +0000 (0:00:01.932) 1:14:45.173 ******* 2026-02-15 07:09:03.806581 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:03.806598 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:03.806610 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:03.806621 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:09:03.806633 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:09:03.806644 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:09:03.806655 | orchestrator | 2026-02-15 07:09:03.806668 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-02-15 07:09:03.806679 | orchestrator | Sunday 15 February 2026 07:08:09 +0000 (0:00:02.177) 1:14:47.350 ******* 2026-02-15 07:09:03.806690 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:03.806701 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:03.806712 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:03.806723 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:09:03.806734 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:09:03.806745 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:09:03.806756 | orchestrator | 2026-02-15 07:09:03.806767 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-02-15 07:09:03.806778 | orchestrator | Sunday 15 February 2026 07:08:11 +0000 (0:00:02.029) 1:14:49.380 ******* 2026-02-15 07:09:03.806788 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:03.806799 | orchestrator | skipping: [testbed-node-1] 2026-02-15 
07:09:03.806810 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:03.806821 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:09:03.806854 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:09:03.806866 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:09:03.806877 | orchestrator | 2026-02-15 07:09:03.806888 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-02-15 07:09:03.806899 | orchestrator | Sunday 15 February 2026 07:08:13 +0000 (0:00:02.097) 1:14:51.478 ******* 2026-02-15 07:09:03.806913 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:03.806925 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:03.806939 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:03.806953 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:09:03.806966 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:09:03.806978 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:09:03.806991 | orchestrator | 2026-02-15 07:09:03.807018 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-02-15 07:09:03.807031 | orchestrator | Sunday 15 February 2026 07:08:15 +0000 (0:00:01.922) 1:14:53.401 ******* 2026-02-15 07:09:03.807042 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:03.807053 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:03.807064 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:03.807075 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:09:03.807085 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:09:03.807096 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:09:03.807107 | orchestrator | 2026-02-15 07:09:03.807117 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-02-15 07:09:03.807129 | orchestrator | Sunday 15 February 2026 07:08:17 +0000 (0:00:01.954) 1:14:55.355 ******* 2026-02-15 07:09:03.807140 | 
orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:03.807151 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:09:03.807162 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:09:03.807172 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:09:03.807183 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:09:03.807194 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:09:03.807205 | orchestrator | 2026-02-15 07:09:03.807216 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-02-15 07:09:03.807226 | orchestrator | Sunday 15 February 2026 07:08:19 +0000 (0:00:01.968) 1:14:57.324 ******* 2026-02-15 07:09:03.807237 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:03.807248 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:09:03.807259 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:09:03.807269 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:09:03.807280 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:09:03.807313 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:09:03.807325 | orchestrator | 2026-02-15 07:09:03.807336 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-02-15 07:09:03.807347 | orchestrator | Sunday 15 February 2026 07:08:21 +0000 (0:00:02.018) 1:14:59.343 ******* 2026-02-15 07:09:03.807357 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:03.807368 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:09:03.807379 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:09:03.807389 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:09:03.807400 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:09:03.807410 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:09:03.807421 | orchestrator | 2026-02-15 07:09:03.807432 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-02-15 07:09:03.807443 | orchestrator | Sunday 15 February 2026 07:08:23 +0000 (0:00:02.298) 
1:15:01.641 ******* 2026-02-15 07:09:03.807454 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:03.807464 | orchestrator | 2026-02-15 07:09:03.807475 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-02-15 07:09:03.807486 | orchestrator | Sunday 15 February 2026 07:08:26 +0000 (0:00:03.168) 1:15:04.809 ******* 2026-02-15 07:09:03.807498 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:03.807508 | orchestrator | 2026-02-15 07:09:03.807519 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-02-15 07:09:03.807530 | orchestrator | Sunday 15 February 2026 07:08:30 +0000 (0:00:03.497) 1:15:08.306 ******* 2026-02-15 07:09:03.807550 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:03.807561 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:09:03.807571 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:09:03.807582 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:09:03.807593 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:09:03.807603 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:09:03.807614 | orchestrator | 2026-02-15 07:09:03.807625 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-02-15 07:09:03.807636 | orchestrator | Sunday 15 February 2026 07:08:32 +0000 (0:00:02.672) 1:15:10.979 ******* 2026-02-15 07:09:03.807646 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:03.807657 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:09:03.807668 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:09:03.807678 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:09:03.807689 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:09:03.807699 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:09:03.807710 | orchestrator | 2026-02-15 07:09:03.807721 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2026-02-15 07:09:03.807748 | orchestrator 
| Sunday 15 February 2026 07:08:34 +0000 (0:00:02.110) 1:15:13.090 ******* 2026-02-15 07:09:03.807761 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-02-15 07:09:03.807773 | orchestrator | 2026-02-15 07:09:03.807784 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-02-15 07:09:03.807795 | orchestrator | Sunday 15 February 2026 07:08:37 +0000 (0:00:02.745) 1:15:15.836 ******* 2026-02-15 07:09:03.807806 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:03.807817 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:09:03.807828 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:09:03.807838 | orchestrator | ok: [testbed-node-3] 2026-02-15 07:09:03.807849 | orchestrator | ok: [testbed-node-4] 2026-02-15 07:09:03.807859 | orchestrator | ok: [testbed-node-5] 2026-02-15 07:09:03.807870 | orchestrator | 2026-02-15 07:09:03.807881 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-02-15 07:09:03.807892 | orchestrator | Sunday 15 February 2026 07:08:40 +0000 (0:00:03.006) 1:15:18.843 ******* 2026-02-15 07:09:03.807903 | orchestrator | changed: [testbed-node-3] 2026-02-15 07:09:03.807913 | orchestrator | changed: [testbed-node-4] 2026-02-15 07:09:03.807924 | orchestrator | changed: [testbed-node-5] 2026-02-15 07:09:03.807935 | orchestrator | changed: [testbed-node-1] 2026-02-15 07:09:03.807946 | orchestrator | changed: [testbed-node-0] 2026-02-15 07:09:03.807957 | orchestrator | changed: [testbed-node-2] 2026-02-15 07:09:03.807967 | orchestrator | 2026-02-15 07:09:03.807978 | orchestrator | PLAY [Complete upgrade] ******************************************************** 2026-02-15 07:09:03.807989 | orchestrator | 2026-02-15 07:09:03.808000 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 
2026-02-15 07:09:03.808011 | orchestrator | Sunday 15 February 2026 07:08:45 +0000 (0:00:04.676) 1:15:23.519 ******* 2026-02-15 07:09:03.808021 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:03.808032 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:09:03.808043 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:09:03.808053 | orchestrator | 2026-02-15 07:09:03.808064 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 07:09:03.808080 | orchestrator | Sunday 15 February 2026 07:08:47 +0000 (0:00:01.731) 1:15:25.251 ******* 2026-02-15 07:09:03.808091 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:03.808102 | orchestrator | ok: [testbed-node-1] 2026-02-15 07:09:03.808112 | orchestrator | ok: [testbed-node-2] 2026-02-15 07:09:03.808123 | orchestrator | 2026-02-15 07:09:03.808134 | orchestrator | TASK [Container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-15 07:09:03.808145 | orchestrator | Sunday 15 February 2026 07:08:48 +0000 (0:00:01.632) 1:15:26.883 ******* 2026-02-15 07:09:03.808156 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:03.808166 | orchestrator | 2026-02-15 07:09:03.808184 | orchestrator | TASK [Non container | disallow pre-reef OSDs and enable all new reef-only functionality] *** 2026-02-15 07:09:03.808195 | orchestrator | Sunday 15 February 2026 07:08:51 +0000 (0:00:02.339) 1:15:29.223 ******* 2026-02-15 07:09:03.808206 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:03.808217 | orchestrator | 2026-02-15 07:09:03.808228 | orchestrator | PLAY [Upgrade node-exporter] *************************************************** 2026-02-15 07:09:03.808239 | orchestrator | 2026-02-15 07:09:03.808250 | orchestrator | TASK [Stop node-exporter] ****************************************************** 2026-02-15 07:09:03.808260 | orchestrator | Sunday 15 February 2026 07:08:53 +0000 (0:00:01.991) 1:15:31.215 ******* 2026-02-15 
07:09:03.808271 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:03.808282 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:03.808312 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:03.808323 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:09:03.808334 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:09:03.808344 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:09:03.808355 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:03.808366 | orchestrator | 2026-02-15 07:09:03.808376 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 07:09:03.808387 | orchestrator | Sunday 15 February 2026 07:08:55 +0000 (0:00:02.258) 1:15:33.473 ******* 2026-02-15 07:09:03.808398 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:03.808409 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:03.808420 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:03.808430 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:09:03.808441 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:09:03.808452 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:09:03.808463 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:03.808473 | orchestrator | 2026-02-15 07:09:03.808484 | orchestrator | TASK [ceph-container-engine : Include pre_requisites/prerequisites.yml] ******** 2026-02-15 07:09:03.808495 | orchestrator | Sunday 15 February 2026 07:08:57 +0000 (0:00:02.404) 1:15:35.878 ******* 2026-02-15 07:09:03.808506 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:03.808517 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:03.808528 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:03.808538 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:09:03.808549 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:09:03.808560 | orchestrator | skipping: [testbed-node-5] 2026-02-15 
07:09:03.808571 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:03.808581 | orchestrator | 2026-02-15 07:09:03.808593 | orchestrator | TASK [ceph-container-common : Container registry authentication] *************** 2026-02-15 07:09:03.808612 | orchestrator | Sunday 15 February 2026 07:09:00 +0000 (0:00:02.513) 1:15:38.391 ******* 2026-02-15 07:09:03.808632 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:03.808661 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:03.808682 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:03.808702 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:09:03.808720 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:09:03.808738 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:09:03.808755 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:03.808772 | orchestrator | 2026-02-15 07:09:03.808790 | orchestrator | TASK [ceph-node-exporter : Include setup_container.yml] ************************ 2026-02-15 07:09:03.808809 | orchestrator | Sunday 15 February 2026 07:09:02 +0000 (0:00:02.550) 1:15:40.942 ******* 2026-02-15 07:09:03.808828 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:03.808848 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:03.808867 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:03.808900 | orchestrator | skipping: [testbed-node-3] 2026-02-15 07:09:53.651091 | orchestrator | skipping: [testbed-node-4] 2026-02-15 07:09:53.651195 | orchestrator | skipping: [testbed-node-5] 2026-02-15 07:09:53.651207 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651216 | orchestrator | 2026-02-15 07:09:53.651225 | orchestrator | PLAY [Upgrade monitoring node] ************************************************* 2026-02-15 07:09:53.651256 | orchestrator | 2026-02-15 07:09:53.651265 | orchestrator | TASK [Stop monitoring services] ************************************************ 2026-02-15 07:09:53.651273 | 
orchestrator | Sunday 15 February 2026 07:09:05 +0000 (0:00:03.120) 1:15:44.062 ******* 2026-02-15 07:09:53.651281 | orchestrator | skipping: [testbed-manager] => (item=alertmanager)  2026-02-15 07:09:53.651289 | orchestrator | skipping: [testbed-manager] => (item=prometheus)  2026-02-15 07:09:53.651298 | orchestrator | skipping: [testbed-manager] => (item=grafana-server)  2026-02-15 07:09:53.651306 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651314 | orchestrator | 2026-02-15 07:09:53.651370 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-15 07:09:53.651380 | orchestrator | Sunday 15 February 2026 07:09:07 +0000 (0:00:01.249) 1:15:45.312 ******* 2026-02-15 07:09:53.651396 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651404 | orchestrator | 2026-02-15 07:09:53.651413 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-15 07:09:53.651421 | orchestrator | Sunday 15 February 2026 07:09:08 +0000 (0:00:01.125) 1:15:46.438 ******* 2026-02-15 07:09:53.651428 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651436 | orchestrator | 2026-02-15 07:09:53.651444 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-15 07:09:53.651452 | orchestrator | Sunday 15 February 2026 07:09:09 +0000 (0:00:01.210) 1:15:47.649 ******* 2026-02-15 07:09:53.651460 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651468 | orchestrator | 2026-02-15 07:09:53.651476 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-15 07:09:53.651484 | orchestrator | Sunday 15 February 2026 07:09:10 +0000 (0:00:01.199) 1:15:48.848 ******* 2026-02-15 07:09:53.651492 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651500 | orchestrator | 2026-02-15 07:09:53.651521 | orchestrator | TASK [ceph-prometheus : Create 
prometheus directories] ************************* 2026-02-15 07:09:53.651529 | orchestrator | Sunday 15 February 2026 07:09:11 +0000 (0:00:01.134) 1:15:49.983 ******* 2026-02-15 07:09:53.651537 | orchestrator | skipping: [testbed-manager] => (item=/etc/prometheus)  2026-02-15 07:09:53.651545 | orchestrator | skipping: [testbed-manager] => (item=/var/lib/prometheus)  2026-02-15 07:09:53.651553 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651560 | orchestrator | 2026-02-15 07:09:53.651568 | orchestrator | TASK [ceph-prometheus : Write prometheus config file] ************************** 2026-02-15 07:09:53.651576 | orchestrator | Sunday 15 February 2026 07:09:13 +0000 (0:00:01.121) 1:15:51.105 ******* 2026-02-15 07:09:53.651584 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651591 | orchestrator | 2026-02-15 07:09:53.651599 | orchestrator | TASK [ceph-prometheus : Make sure the alerting rules directory exists] ********* 2026-02-15 07:09:53.651607 | orchestrator | Sunday 15 February 2026 07:09:14 +0000 (0:00:01.162) 1:15:52.267 ******* 2026-02-15 07:09:53.651614 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651622 | orchestrator | 2026-02-15 07:09:53.651630 | orchestrator | TASK [ceph-prometheus : Copy alerting rules] *********************************** 2026-02-15 07:09:53.651638 | orchestrator | Sunday 15 February 2026 07:09:15 +0000 (0:00:01.111) 1:15:53.379 ******* 2026-02-15 07:09:53.651647 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651656 | orchestrator | 2026-02-15 07:09:53.651665 | orchestrator | TASK [ceph-prometheus : Create alertmanager directories] *********************** 2026-02-15 07:09:53.651674 | orchestrator | Sunday 15 February 2026 07:09:16 +0000 (0:00:01.242) 1:15:54.621 ******* 2026-02-15 07:09:53.651683 | orchestrator | skipping: [testbed-manager] => (item=/etc/alertmanager)  2026-02-15 07:09:53.651692 | orchestrator | skipping: [testbed-manager] => 
(item=/var/lib/alertmanager)  2026-02-15 07:09:53.651701 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651710 | orchestrator | 2026-02-15 07:09:53.651720 | orchestrator | TASK [ceph-prometheus : Write alertmanager config file] ************************ 2026-02-15 07:09:53.651728 | orchestrator | Sunday 15 February 2026 07:09:17 +0000 (0:00:01.182) 1:15:55.803 ******* 2026-02-15 07:09:53.651743 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651753 | orchestrator | 2026-02-15 07:09:53.651762 | orchestrator | TASK [ceph-prometheus : Include setup_container.yml] *************************** 2026-02-15 07:09:53.651772 | orchestrator | Sunday 15 February 2026 07:09:18 +0000 (0:00:01.145) 1:15:56.948 ******* 2026-02-15 07:09:53.651780 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651789 | orchestrator | 2026-02-15 07:09:53.651798 | orchestrator | TASK [ceph-grafana : Include setup_container.yml] ****************************** 2026-02-15 07:09:53.651807 | orchestrator | Sunday 15 February 2026 07:09:20 +0000 (0:00:01.225) 1:15:58.174 ******* 2026-02-15 07:09:53.651815 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651823 | orchestrator | 2026-02-15 07:09:53.651830 | orchestrator | TASK [ceph-grafana : Include configure_grafana.yml] **************************** 2026-02-15 07:09:53.651838 | orchestrator | Sunday 15 February 2026 07:09:21 +0000 (0:00:01.158) 1:15:59.333 ******* 2026-02-15 07:09:53.651846 | orchestrator | skipping: [testbed-manager] 2026-02-15 07:09:53.651854 | orchestrator | 2026-02-15 07:09:53.651861 | orchestrator | PLAY [Upgrade ceph dashboard] ************************************************** 2026-02-15 07:09:53.651869 | orchestrator | 2026-02-15 07:09:53.651877 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-02-15 07:09:53.651884 | orchestrator | Sunday 15 February 2026 07:09:22 +0000 (0:00:01.636) 1:16:00.969 ******* 2026-02-15 
07:09:53.651892 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:53.651900 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:53.651908 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:53.651916 | orchestrator | 2026-02-15 07:09:53.651923 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv4] ************************ 2026-02-15 07:09:53.651931 | orchestrator | Sunday 15 February 2026 07:09:24 +0000 (0:00:01.941) 1:16:02.910 ******* 2026-02-15 07:09:53.651939 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:53.651947 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:53.651970 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:53.651978 | orchestrator | 2026-02-15 07:09:53.651986 | orchestrator | TASK [ceph-facts : Set grafana_server_addr fact - ipv6] ************************ 2026-02-15 07:09:53.651994 | orchestrator | Sunday 15 February 2026 07:09:26 +0000 (0:00:01.449) 1:16:04.360 ******* 2026-02-15 07:09:53.652002 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:53.652010 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:53.652018 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:53.652025 | orchestrator | 2026-02-15 07:09:53.652033 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv4] *********************** 2026-02-15 07:09:53.652041 | orchestrator | Sunday 15 February 2026 07:09:27 +0000 (0:00:01.378) 1:16:05.739 ******* 2026-02-15 07:09:53.652049 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:53.652057 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:53.652065 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:53.652073 | orchestrator | 2026-02-15 07:09:53.652080 | orchestrator | TASK [ceph-facts : Set grafana_server_addrs fact - ipv6] *********************** 2026-02-15 07:09:53.652088 | orchestrator | Sunday 15 February 2026 07:09:29 +0000 (0:00:01.674) 1:16:07.414 ******* 2026-02-15 
07:09:53.652096 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:53.652104 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:53.652112 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:53.652120 | orchestrator | 2026-02-15 07:09:53.652127 | orchestrator | TASK [ceph-dashboard : Include configure_dashboard.yml] ************************ 2026-02-15 07:09:53.652135 | orchestrator | Sunday 15 February 2026 07:09:30 +0000 (0:00:01.654) 1:16:09.069 ******* 2026-02-15 07:09:53.652143 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:53.652151 | orchestrator | skipping: [testbed-node-1] 2026-02-15 07:09:53.652159 | orchestrator | skipping: [testbed-node-2] 2026-02-15 07:09:53.652167 | orchestrator | 2026-02-15 07:09:53.652175 | orchestrator | TASK [ceph-dashboard : Print dashboard URL] ************************************ 2026-02-15 07:09:53.652188 | orchestrator | Sunday 15 February 2026 07:09:32 +0000 (0:00:01.503) 1:16:10.572 ******* 2026-02-15 07:09:53.652196 | orchestrator | skipping: [testbed-node-0] 2026-02-15 07:09:53.652204 | orchestrator | 2026-02-15 07:09:53.652212 | orchestrator | PLAY [Switch any existing crush buckets to straw2] ***************************** 2026-02-15 07:09:53.652220 | orchestrator | 2026-02-15 07:09:53.652232 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-02-15 07:09:53.652240 | orchestrator | Sunday 15 February 2026 07:09:34 +0000 (0:00:02.016) 1:16:12.589 ******* 2026-02-15 07:09:53.652248 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:53.652256 | orchestrator | 2026-02-15 07:09:53.652264 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-02-15 07:09:53.652272 | orchestrator | Sunday 15 February 2026 07:09:35 +0000 (0:00:01.470) 1:16:14.059 ******* 2026-02-15 07:09:53.652279 | orchestrator | ok: [testbed-node-0] 2026-02-15 07:09:53.652287 | orchestrator | 2026-02-15 07:09:53.652295 
| orchestrator | TASK [Set_fact ceph_cmd] *******************************************************
2026-02-15 07:09:53.652303 | orchestrator | Sunday 15 February 2026 07:09:37 +0000 (0:00:01.134) 1:16:15.194 *******
2026-02-15 07:09:53.652310 | orchestrator | ok: [testbed-node-0]
2026-02-15 07:09:53.652318 | orchestrator |
2026-02-15 07:09:53.652344 | orchestrator | TASK [Backup the crushmap] *****************************************************
2026-02-15 07:09:53.652353 | orchestrator | Sunday 15 February 2026 07:09:38 +0000 (0:00:01.134) 1:16:16.328 *******
2026-02-15 07:09:53.652360 | orchestrator | ok: [testbed-node-0]
2026-02-15 07:09:53.652368 | orchestrator |
2026-02-15 07:09:53.652376 | orchestrator | TASK [Switch crush buckets to straw2] ******************************************
2026-02-15 07:09:53.652384 | orchestrator | Sunday 15 February 2026 07:09:41 +0000 (0:00:02.920) 1:16:19.249 *******
2026-02-15 07:09:53.652392 | orchestrator | ok: [testbed-node-0]
2026-02-15 07:09:53.652400 | orchestrator |
2026-02-15 07:09:53.652407 | orchestrator | TASK [Remove crushmap backup] **************************************************
2026-02-15 07:09:53.652415 | orchestrator | Sunday 15 February 2026 07:09:44 +0000 (0:00:03.052) 1:16:22.302 *******
2026-02-15 07:09:53.652423 | orchestrator | changed: [testbed-node-0]
2026-02-15 07:09:53.652431 | orchestrator |
2026-02-15 07:09:53.652439 | orchestrator | PLAY [Show ceph status] ********************************************************
2026-02-15 07:09:53.652447 | orchestrator |
2026-02-15 07:09:53.652455 | orchestrator | TASK [Set_fact container_exec_cmd_status] **************************************
2026-02-15 07:09:53.652463 | orchestrator | Sunday 15 February 2026 07:09:46 +0000 (0:00:01.818) 1:16:24.120 *******
2026-02-15 07:09:53.652471 | orchestrator | ok: [testbed-node-0]
2026-02-15 07:09:53.652479 | orchestrator | ok: [testbed-node-1]
2026-02-15 07:09:53.652486 | orchestrator | ok: [testbed-node-2]
2026-02-15 07:09:53.652494 | orchestrator |
2026-02-15 07:09:53.652502 | orchestrator | TASK [Show ceph status] ********************************************************
2026-02-15 07:09:53.652510 | orchestrator | Sunday 15 February 2026 07:09:47 +0000 (0:00:01.846) 1:16:25.966 *******
2026-02-15 07:09:53.652518 | orchestrator | ok: [testbed-node-0]
2026-02-15 07:09:53.652526 | orchestrator |
2026-02-15 07:09:53.652534 | orchestrator | TASK [Show all daemons version] ************************************************
2026-02-15 07:09:53.652542 | orchestrator | Sunday 15 February 2026 07:09:50 +0000 (0:00:02.353) 1:16:28.320 *******
2026-02-15 07:09:53.652550 | orchestrator | ok: [testbed-node-0]
2026-02-15 07:09:53.652557 | orchestrator |
2026-02-15 07:09:53.652565 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 07:09:53.652574 | orchestrator | localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2026-02-15 07:09:53.652583 | orchestrator | testbed-manager : ok=25  changed=1  unreachable=0 failed=0 skipped=76  rescued=0 ignored=0
2026-02-15 07:09:53.652592 | orchestrator | testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 skipped=376  rescued=0 ignored=0
2026-02-15 07:09:53.652600 | orchestrator | testbed-node-1 : ok=191  changed=15  unreachable=0 failed=0 skipped=350  rescued=0 ignored=0
2026-02-15 07:09:53.652620 | orchestrator | testbed-node-2 : ok=196  changed=16  unreachable=0 failed=0 skipped=351  rescued=0 ignored=0
2026-02-15 07:09:54.420269 | orchestrator | testbed-node-3 : ok=311  changed=22  unreachable=0 failed=0 skipped=348  rescued=0 ignored=0
2026-02-15 07:09:54.420450 | orchestrator | testbed-node-4 : ok=308  changed=17  unreachable=0 failed=0 skipped=359  rescued=0 ignored=0
2026-02-15 07:09:54.420471 | orchestrator | testbed-node-5 : ok=308  changed=18  unreachable=0 failed=0 skipped=358  rescued=0 ignored=0
2026-02-15 07:09:54.420484 | orchestrator |
2026-02-15 07:09:54.420496 | orchestrator |
2026-02-15 07:09:54.420507 | orchestrator |
2026-02-15 07:09:54.420518 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 07:09:54.420530 | orchestrator | Sunday 15 February 2026 07:09:53 +0000 (0:00:03.403) 1:16:31.723 *******
2026-02-15 07:09:54.420542 | orchestrator | ===============================================================================
2026-02-15 07:09:54.420553 | orchestrator | Disable pg autoscale on pools ------------------------------------------ 73.80s
2026-02-15 07:09:54.420563 | orchestrator | Re-enable pg autoscale on pools ---------------------------------------- 72.98s
2026-02-15 07:09:54.420574 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 32.22s
2026-02-15 07:09:54.420585 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.30s
2026-02-15 07:09:54.420596 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.95s
2026-02-15 07:09:54.420607 | orchestrator | Gather and delegate facts ---------------------------------------------- 30.88s
2026-02-15 07:09:54.420618 | orchestrator | Waiting for clean pgs... ----------------------------------------------- 30.05s
2026-02-15 07:09:54.420648 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 28.74s
2026-02-15 07:09:54.420660 | orchestrator | Stop ceph mgr ---------------------------------------------------------- 28.28s
2026-02-15 07:09:54.420670 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 23.04s
2026-02-15 07:09:54.420681 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 22.95s
2026-02-15 07:09:54.420692 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 22.16s
2026-02-15 07:09:54.420703 | orchestrator | Create potentially missing keys (rbd and rbd-mirror) ------------------- 16.69s
2026-02-15 07:09:54.420713 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 15.63s
2026-02-15 07:09:54.420724 | orchestrator | ceph-config : Set config to cluster ------------------------------------ 14.05s
2026-02-15 07:09:54.420735 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.60s
2026-02-15 07:09:54.420746 | orchestrator | ceph-config : Set osd_memory_target to cluster host config ------------- 12.41s
2026-02-15 07:09:54.420756 | orchestrator | Stop ceph osd ---------------------------------------------------------- 11.58s
2026-02-15 07:09:54.420767 | orchestrator | ceph-infra : Update cache for Debian based OSs ------------------------- 11.07s
2026-02-15 07:09:54.420778 | orchestrator | Restart active mds ----------------------------------------------------- 10.81s
2026-02-15 07:09:54.732838 | orchestrator | + osism apply cephclient
2026-02-15 07:09:56.864988 | orchestrator | 2026-02-15 07:09:56 | INFO  | Task 436db98b-cf51-4a5b-96ae-cc80a9ec7b36 (cephclient) was prepared for execution.
2026-02-15 07:09:56.865093 | orchestrator | 2026-02-15 07:09:56 | INFO  | It takes a moment until task 436db98b-cf51-4a5b-96ae-cc80a9ec7b36 (cephclient) has been started and output is visible here.
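The PLAY RECAP above reports per-host task counters in Ansible's fixed `host : key=value …` layout. A minimal sketch of parsing one recap line into a dict (the helper name is hypothetical, not part of the OSISM or testbed tooling):

```python
import re

# Matches one Ansible PLAY RECAP line, e.g.:
#   testbed-node-0 : ok=248  changed=20  unreachable=0 failed=0 skipped=376  rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Return (hostname, {counter: value}) for a PLAY RECAP line."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters
```

This makes it easy to flag hosts with `failed` or `unreachable` counters greater than zero when post-processing a job console like this one.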
2026-02-15 07:10:25.658108 | orchestrator |
2026-02-15 07:10:25.658253 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2026-02-15 07:10:25.658303 | orchestrator |
2026-02-15 07:10:25.658317 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2026-02-15 07:10:25.658328 | orchestrator | Sunday 15 February 2026 07:10:03 +0000 (0:00:01.903) 0:00:01.903 *******
2026-02-15 07:10:25.658396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2026-02-15 07:10:25.658413 | orchestrator |
2026-02-15 07:10:25.658424 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2026-02-15 07:10:25.658435 | orchestrator | Sunday 15 February 2026 07:10:05 +0000 (0:00:01.865) 0:00:03.769 *******
2026-02-15 07:10:25.658446 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/configuration)
2026-02-15 07:10:25.658457 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient/data)
2026-02-15 07:10:25.658469 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2026-02-15 07:10:25.658480 | orchestrator |
2026-02-15 07:10:25.658491 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2026-02-15 07:10:25.658501 | orchestrator | Sunday 15 February 2026 07:10:08 +0000 (0:00:02.557) 0:00:06.326 *******
2026-02-15 07:10:25.658513 | orchestrator | ok: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2026-02-15 07:10:25.658524 | orchestrator |
2026-02-15 07:10:25.658534 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2026-02-15 07:10:25.658545 | orchestrator | Sunday 15 February 2026 07:10:10 +0000 (0:00:02.102) 0:00:08.429 *******
2026-02-15 07:10:25.658556 | orchestrator | ok: [testbed-manager]
2026-02-15 07:10:25.658567 | orchestrator |
2026-02-15 07:10:25.658578 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2026-02-15 07:10:25.658591 | orchestrator | Sunday 15 February 2026 07:10:12 +0000 (0:00:01.895) 0:00:10.324 *******
2026-02-15 07:10:25.658603 | orchestrator | ok: [testbed-manager]
2026-02-15 07:10:25.658614 | orchestrator |
2026-02-15 07:10:25.658627 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2026-02-15 07:10:25.658639 | orchestrator | Sunday 15 February 2026 07:10:14 +0000 (0:00:01.878) 0:00:12.203 *******
2026-02-15 07:10:25.658651 | orchestrator | ok: [testbed-manager]
2026-02-15 07:10:25.658662 | orchestrator |
2026-02-15 07:10:25.658673 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2026-02-15 07:10:25.658683 | orchestrator | Sunday 15 February 2026 07:10:16 +0000 (0:00:02.066) 0:00:14.270 *******
2026-02-15 07:10:25.658694 | orchestrator | ok: [testbed-manager] => (item=ceph)
2026-02-15 07:10:25.658705 | orchestrator | ok: [testbed-manager] => (item=ceph-authtool)
2026-02-15 07:10:25.658717 | orchestrator | ok: [testbed-manager] => (item=rados)
2026-02-15 07:10:25.658728 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2026-02-15 07:10:25.658739 | orchestrator | ok: [testbed-manager] => (item=rbd)
2026-02-15 07:10:25.658749 | orchestrator |
2026-02-15 07:10:25.658760 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2026-02-15 07:10:25.658771 | orchestrator | Sunday 15 February 2026 07:10:21 +0000 (0:00:04.998) 0:00:19.269 *******
2026-02-15 07:10:25.658781 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2026-02-15 07:10:25.658792 | orchestrator |
2026-02-15 07:10:25.658803 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2026-02-15 07:10:25.658813 | orchestrator | Sunday 15 February 2026 07:10:22 +0000 (0:00:01.495) 0:00:20.764 *******
2026-02-15 07:10:25.658824 | orchestrator | skipping: [testbed-manager]
2026-02-15 07:10:25.658835 | orchestrator |
2026-02-15 07:10:25.658845 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2026-02-15 07:10:25.658856 | orchestrator | Sunday 15 February 2026 07:10:23 +0000 (0:00:01.121) 0:00:21.886 *******
2026-02-15 07:10:25.658867 | orchestrator | skipping: [testbed-manager]
2026-02-15 07:10:25.658877 | orchestrator |
2026-02-15 07:10:25.658888 | orchestrator | PLAY RECAP *********************************************************************
2026-02-15 07:10:25.658913 | orchestrator | testbed-manager : ok=8  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-02-15 07:10:25.658934 | orchestrator |
2026-02-15 07:10:25.658945 | orchestrator |
2026-02-15 07:10:25.658956 | orchestrator | TASKS RECAP ********************************************************************
2026-02-15 07:10:25.658967 | orchestrator | Sunday 15 February 2026 07:10:25 +0000 (0:00:01.506) 0:00:23.393 *******
2026-02-15 07:10:25.658977 | orchestrator | ===============================================================================
2026-02-15 07:10:25.658988 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 5.00s
2026-02-15 07:10:25.658998 | orchestrator | osism.services.cephclient : Create required directories ----------------- 2.56s
2026-02-15 07:10:25.659009 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 2.10s
2026-02-15 07:10:25.659020 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------- 2.07s
2026-02-15 07:10:25.659030 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.90s
2026-02-15 07:10:25.659041 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.88s
2026-02-15 07:10:25.659052 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 1.87s
2026-02-15 07:10:25.659063 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 1.51s
2026-02-15 07:10:25.659074 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 1.50s
2026-02-15 07:10:25.659084 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 1.12s
2026-02-15 07:10:25.981084 | orchestrator | + [[ false == \f\a\l\s\e ]]
2026-02-15 07:10:25.981180 | orchestrator | + sh -c /opt/configuration/scripts/upgrade/300-openstack.sh
2026-02-15 07:10:25.991655 | orchestrator | + set -e
2026-02-15 07:10:25.991731 | orchestrator | + source /opt/configuration/scripts/include.sh
2026-02-15 07:10:25.991751 | orchestrator | ++ export INTERACTIVE=false
2026-02-15 07:10:25.991763 | orchestrator | ++ INTERACTIVE=false
2026-02-15 07:10:25.991774 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2026-02-15 07:10:25.991784 | orchestrator | ++ OSISM_APPLY_RETRY=1
2026-02-15 07:10:25.991795 | orchestrator | + source /opt/manager-vars.sh
2026-02-15 07:10:25.991805 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-02-15 07:10:25.991816 | orchestrator | ++ NUMBER_OF_NODES=6
2026-02-15 07:10:25.991826 | orchestrator | ++ export CEPH_VERSION=reef
2026-02-15 07:10:25.991837 | orchestrator | ++ CEPH_VERSION=reef
2026-02-15 07:10:25.991848 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-02-15 07:10:25.991859 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-02-15 07:10:25.991869 | orchestrator | ++ export MANAGER_VERSION=9.5.0
2026-02-15 07:10:25.991880 | orchestrator | ++ MANAGER_VERSION=9.5.0
2026-02-15 07:10:25.991891 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-02-15 07:10:25.991901 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-02-15 07:10:25.991912 | orchestrator | ++ export ARA=false
2026-02-15 07:10:25.991923 | orchestrator | ++ ARA=false
2026-02-15 07:10:25.991934 | orchestrator | ++ export DEPLOY_MODE=manager
2026-02-15 07:10:25.991944 | orchestrator | ++ DEPLOY_MODE=manager
2026-02-15 07:10:25.991955 | orchestrator | ++ export TEMPEST=false
2026-02-15 07:10:25.991965 | orchestrator | ++ TEMPEST=false
2026-02-15 07:10:25.991975 | orchestrator | ++ export IS_ZUUL=true
2026-02-15 07:10:25.991986 | orchestrator | ++ IS_ZUUL=true
2026-02-15 07:10:25.991997 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 07:10:25.992008 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.145
2026-02-15 07:10:25.992018 | orchestrator | ++ export EXTERNAL_API=false
2026-02-15 07:10:25.992040 | orchestrator | ++ EXTERNAL_API=false
2026-02-15 07:10:25.992051 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-02-15 07:10:25.992061 | orchestrator | ++ IMAGE_USER=ubuntu
2026-02-15 07:10:25.992072 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-02-15 07:10:25.992082 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-02-15 07:10:25.992093 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-02-15 07:10:25.992104 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-02-15 07:10:25.992114 | orchestrator | ++ export RABBITMQ3TO4=true
2026-02-15 07:10:25.992125 | orchestrator | ++ RABBITMQ3TO4=true
2026-02-15 07:10:25.992136 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2026-02-15 07:10:25.993380 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2026-02-15 07:10:25.999544 | orchestrator | ++ export MANAGER_VERSION=10.0.0-rc.1
2026-02-15 07:10:25.999627 | orchestrator | ++ MANAGER_VERSION=10.0.0-rc.1
2026-02-15 07:10:25.999640 | orchestrator | + [[ true == \t\r\u\e ]]
2026-02-15 07:10:25.999652 | orchestrator | + osism migrate rabbitmq3to4 prepare
2026-02-15 07:10:48.445607 | orchestrator | 2026-02-15 07:10:48 | ERROR  | Unable to get ansible vault password
2026-02-15 07:10:48.445725 | orchestrator | 2026-02-15 07:10:48 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-15 07:10:48.445742 | orchestrator | 2026-02-15 07:10:48 | ERROR  | Dropping encrypted entries
2026-02-15 07:10:48.482914 | orchestrator | 2026-02-15 07:10:48 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-02-15 07:10:48.483982 | orchestrator | 2026-02-15 07:10:48 | INFO  | Kolla configuration check passed
2026-02-15 07:10:48.693912 | orchestrator | 2026-02-15 07:10:48 | INFO  | Created vhost 'openstack' with default_queue_type=quorum
2026-02-15 07:10:48.710771 | orchestrator | 2026-02-15 07:10:48 | INFO  | Set permissions for user 'openstack' on vhost 'openstack'
2026-02-15 07:10:49.040242 | orchestrator | + osism migrate rabbitmq3to4 list
2026-02-15 07:11:10.097175 | orchestrator | 2026-02-15 07:11:10 | ERROR  | Unable to get ansible vault password
2026-02-15 07:11:10.097256 | orchestrator | 2026-02-15 07:11:10 | ERROR  | Unable to get vault secret: [Errno 2] No such file or directory: '/share/ansible_vault_password.key'
2026-02-15 07:11:10.097264 | orchestrator | 2026-02-15 07:11:10 | ERROR  | Dropping encrypted entries
2026-02-15 07:11:10.142425 | orchestrator | 2026-02-15 07:11:10 | INFO  | Connecting to RabbitMQ Management API at 192.168.16.10:15672 (node: testbed-node-0) as openstack...
2026-02-15 07:11:10.299907 | orchestrator | 2026-02-15 07:11:10 | INFO  | Found 208 classic queue(s) in vhost '/':
2026-02-15 07:11:10.300049 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - alarm.all.sample (vhost: /, messages: 0)
2026-02-15 07:11:10.300074 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - alarming.sample (vhost: /, messages: 0)
2026-02-15 07:11:10.300093 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - barbican.workers (vhost: /, messages: 0)
2026-02-15 07:11:10.300575 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - barbican.workers.barbican.queue (vhost: /, messages: 0)
2026-02-15 07:11:10.300612 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - barbican.workers_fanout_0a4ea5bee4b444a2817eda12355e8e8e (vhost: /, messages: 0)
2026-02-15 07:11:10.300632 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - barbican.workers_fanout_71a0c5b9da964d1b8dea06fed03ccb76 (vhost: /, messages: 0)
2026-02-15 07:11:10.300650 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - barbican.workers_fanout_926746eec34f4a6a857c7c3eaa3daaf7 (vhost: /, messages: 0)
2026-02-15 07:11:10.300669 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - barbican_notifications.info (vhost: /, messages: 0)
2026-02-15 07:11:10.301439 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - central (vhost: /, messages: 0)
2026-02-15 07:11:10.301773 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - central.testbed-node-0 (vhost: /, messages: 0)
2026-02-15 07:11:10.301799 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - central.testbed-node-1 (vhost: /, messages: 0)
2026-02-15 07:11:10.302662 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - central.testbed-node-2 (vhost: /, messages: 0)
2026-02-15 07:11:10.302765 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - central_fanout_1d4bf68e120a4a5696b81fcac082ab38 (vhost: /, messages: 0)
2026-02-15 07:11:10.302783 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - central_fanout_3bffe503a98649c6a3e1acf2ce7136c8 (vhost: /, messages: 0)
2026-02-15 07:11:10.302823 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - central_fanout_48e08052100848c99981a0460640d8a7 (vhost: /, messages: 0)
2026-02-15 07:11:10.302836 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - central_fanout_70a66a672dd04b088e8c02e39e927445 (vhost: /, messages: 0)
2026-02-15 07:11:10.302847 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - central_fanout_b71f94c9f0a744fd96802086d546de7b (vhost: /, messages: 0)
2026-02-15 07:11:10.302858 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - central_fanout_eb8c8c405643471c8cd7890bea1fd46a (vhost: /, messages: 0)
2026-02-15 07:11:10.303554 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-backup (vhost: /, messages: 0)
2026-02-15 07:11:10.303585 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-backup.testbed-node-0 (vhost: /, messages: 0)
2026-02-15 07:11:10.303597 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-backup.testbed-node-1 (vhost: /, messages: 0)
2026-02-15 07:11:10.303608 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-backup.testbed-node-2 (vhost: /, messages: 0)
2026-02-15 07:11:10.303619 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-backup_fanout_4b8ab3a61c044226996dcc3ee9b4592d (vhost: /, messages: 0)
2026-02-15 07:11:10.303631 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-backup_fanout_b5035239a54047b79fd8a7d87ff6e92d (vhost: /, messages: 0)
2026-02-15 07:11:10.303642 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-backup_fanout_fea4e71fefca4831940d5f07f245ff88 (vhost: /, messages: 0)
2026-02-15 07:11:10.303653 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-scheduler (vhost: /, messages: 0)
2026-02-15 07:11:10.304287 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-15 07:11:10.304422 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-15 07:11:10.304442 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-15 07:11:10.304462 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-scheduler_fanout_1ca860f2c5bc4ca88acfa1eea8d30209 (vhost: /, messages: 0)
2026-02-15 07:11:10.304481 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-scheduler_fanout_22277ba858ab4215b8b0ace1acf55beb (vhost: /, messages: 0)
2026-02-15 07:11:10.304500 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-scheduler_fanout_995602b029b14a68934dcb1850679b7f (vhost: /, messages: 0)
2026-02-15 07:11:10.304628 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume (vhost: /, messages: 0)
2026-02-15 07:11:10.304666 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes (vhost: /, messages: 0)
2026-02-15 07:11:10.304678 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes.testbed-node-0 (vhost: /, messages: 0)
2026-02-15 07:11:10.304689 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume.testbed-node-0@rbd-volumes_fanout_d7f2d29ce9e749f8b2cc546715debcc9 (vhost: /, messages: 0)
2026-02-15 07:11:10.304701 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes (vhost: /, messages: 0)
2026-02-15 07:11:10.304793 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes.testbed-node-1 (vhost: /, messages: 0)
2026-02-15 07:11:10.304813 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume.testbed-node-1@rbd-volumes_fanout_60f2d9927e254f04a615c76436d3f870 (vhost: /, messages: 0)
2026-02-15 07:11:10.304833 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes (vhost: /, messages: 0)
2026-02-15 07:11:10.305447 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes.testbed-node-2 (vhost: /, messages: 0)
2026-02-15 07:11:10.305618 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume.testbed-node-2@rbd-volumes_fanout_29746cfce7d041a7a6281cd061325fa5 (vhost: /, messages: 0)
2026-02-15 07:11:10.305641 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume_fanout_1e5c92f1e647499d8033791db153a2e2 (vhost: /, messages: 0)
2026-02-15 07:11:10.305788 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume_fanout_4a73f2c270a24868a71d0721beda8371 (vhost: /, messages: 0)
2026-02-15 07:11:10.305802 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - cinder-volume_fanout_b339b00af21141428dffe4c1d3cab12d (vhost: /, messages: 0)
2026-02-15 07:11:10.305838 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - compute (vhost: /, messages: 0)
2026-02-15 07:11:10.305852 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - compute.testbed-node-3 (vhost: /, messages: 0)
2026-02-15 07:11:10.305922 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - compute.testbed-node-4 (vhost: /, messages: 0)
2026-02-15 07:11:10.305937 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - compute.testbed-node-5 (vhost: /, messages: 0)
2026-02-15 07:11:10.305957 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - compute_fanout_7cadb8192e454234bda3fe381e3032de (vhost: /, messages: 0)
2026-02-15 07:11:10.305977 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - compute_fanout_aa8c5b955a954f39b4f6ff3615fa9cf3 (vhost: /, messages: 0)
2026-02-15 07:11:10.305998 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - compute_fanout_c333769e59734d7e870d5378457cd834 (vhost: /, messages: 0)
2026-02-15 07:11:10.306220 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - conductor (vhost: /, messages: 0)
2026-02-15 07:11:10.306253 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - conductor.testbed-node-0 (vhost: /, messages: 0)
2026-02-15 07:11:10.306656 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - conductor.testbed-node-1 (vhost: /, messages: 0)
2026-02-15 07:11:10.306682 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - conductor.testbed-node-2 (vhost: /, messages: 0)
2026-02-15 07:11:10.306694 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - conductor_fanout_03c9086fccce473383e7b94c0b9f74db (vhost: /, messages: 0)
2026-02-15 07:11:10.306705 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - conductor_fanout_16d4bd70da2e4b57af0ada2c203eca8d (vhost: /, messages: 0)
2026-02-15 07:11:10.307069 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - conductor_fanout_44bf9c92b3bf425c8bf3fb3ccb108782 (vhost: /, messages: 0)
2026-02-15 07:11:10.307095 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - conductor_fanout_7821469cdf954425a67b5ee1ff21f772 (vhost: /, messages: 0)
2026-02-15 07:11:10.307554 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - conductor_fanout_ef08ef0941204a0c92a694dc68598f71 (vhost: /, messages: 0)
2026-02-15 07:11:10.307587 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - conductor_fanout_f5104fed9b15432eb7c1d6ffdf931138 (vhost: /, messages: 0)
2026-02-15 07:11:10.307606 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - event.sample (vhost: /, messages: 9)
2026-02-15 07:11:10.307621 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor (vhost: /, messages: 0)
2026-02-15 07:11:10.307772 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor.ewqeds4prchy (vhost: /, messages: 0)
2026-02-15 07:11:10.307803 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor.n2or4gom3p6g (vhost: /, messages: 0)
2026-02-15 07:11:10.307829 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor.smujstzyq3sn (vhost: /, messages: 0)
2026-02-15 07:11:10.307944 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor_fanout_0b57979afc224ed09097a9205e42fc0f (vhost: /, messages: 0)
2026-02-15 07:11:10.308300 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor_fanout_429ea328890b43138758b86ae34667b2 (vhost: /, messages: 0)
2026-02-15 07:11:10.308331 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor_fanout_47f031b668e344618c42a413c49c534d (vhost: /, messages: 0)
2026-02-15 07:11:10.308351 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor_fanout_5d0f84aeabf1450cb4a913c0b0551dc5 (vhost: /, messages: 0)
2026-02-15 07:11:10.308394 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor_fanout_67ae445a3d864c1fa3cf3be19f199054 (vhost: /, messages: 0)
2026-02-15 07:11:10.308587 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor_fanout_7fbc2459e5924e7a918234fda6e347de (vhost: /, messages: 0)
2026-02-15 07:11:10.309023 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor_fanout_8aa8b4aa10ea4609a5dc3247b1c2e63c (vhost: /, messages: 0)
2026-02-15 07:11:10.309064 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor_fanout_dcc21d580f194dfcaa3ae22b3ccca572 (vhost: /, messages: 0)
2026-02-15 07:11:10.309084 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - magnum-conductor_fanout_e49f1882bf414d0eb0d24612add84b26 (vhost: /, messages: 0)
2026-02-15 07:11:10.309578 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-data (vhost: /, messages: 0)
2026-02-15 07:11:10.309607 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-data.testbed-node-0 (vhost: /, messages: 0)
2026-02-15 07:11:10.309834 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-data.testbed-node-1 (vhost: /, messages: 0)
2026-02-15 07:11:10.309860 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-data.testbed-node-2 (vhost: /, messages: 0)
2026-02-15 07:11:10.309877 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-data_fanout_a45171df3a3e4db7a9f2d63a511b0fb7 (vhost: /, messages: 0)
2026-02-15 07:11:10.310493 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-data_fanout_dd2d7f51681842df8a949a9e2550bf7a (vhost: /, messages: 0)
2026-02-15 07:11:10.310520 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-data_fanout_f8b8a6f187ad4f75b17ded8400a59d5b (vhost: /, messages: 0)
2026-02-15 07:11:10.310530 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-scheduler (vhost: /, messages: 0)
2026-02-15 07:11:10.310540 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-scheduler.testbed-node-0 (vhost: /, messages: 0)
2026-02-15 07:11:10.310557 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-scheduler.testbed-node-1 (vhost: /, messages: 0)
2026-02-15 07:11:10.310747 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-scheduler.testbed-node-2 (vhost: /, messages: 0)
2026-02-15 07:11:10.310768 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-scheduler_fanout_143d4e3f052040febbbc5d9a31dd6d78 (vhost: /, messages: 0)
2026-02-15 07:11:10.311619 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-scheduler_fanout_55cf613309674f008a05776ed8c22c10 (vhost: /, messages: 0)
2026-02-15 07:11:10.311670 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-scheduler_fanout_ad4fb4d932224a7a84595a11ddd68d6d (vhost: /, messages: 0)
2026-02-15 07:11:10.311681 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-share (vhost: /, messages: 0)
2026-02-15 07:11:10.311691 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-share.testbed-node-0@cephfsnative1 (vhost: /, messages: 0)
2026-02-15 07:11:10.311715 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-share.testbed-node-1@cephfsnative1 (vhost: /, messages: 0)
2026-02-15 07:11:10.311725 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-share.testbed-node-2@cephfsnative1 (vhost: /, messages: 0)
2026-02-15 07:11:10.311734 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-share_fanout_16e5fec8cee64ff381a5d69817c56183 (vhost: /, messages: 0)
2026-02-15 07:11:10.311744 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-share_fanout_2b6979950ade4cca9dc5cb3a3652e40e (vhost: /, messages: 0)
2026-02-15 07:11:10.311992 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - manila-share_fanout_717e3c11c8ac46bdbe7736d84083a9ad (vhost: /, messages: 0)
2026-02-15 07:11:10.312021 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - notifications.audit (vhost: /, messages: 0)
2026-02-15 07:11:10.313569 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - notifications.critical (vhost: /, messages: 0)
2026-02-15 07:11:10.313590 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - notifications.debug (vhost: /, messages: 0)
2026-02-15 07:11:10.313600 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - notifications.error (vhost: /, messages: 0)
2026-02-15 07:11:10.313610 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - notifications.info (vhost: /, messages: 0)
2026-02-15 07:11:10.313619 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - notifications.sample (vhost: /, messages: 0)
2026-02-15 07:11:10.313629 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - notifications.warn (vhost: /, messages: 0)
2026-02-15 07:11:10.313639 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - octavia_provisioning_v2 (vhost: /, messages: 0)
2026-02-15 07:11:10.313649 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - octavia_provisioning_v2.testbed-node-0 (vhost: /, messages: 0)
2026-02-15 07:11:10.313658 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - octavia_provisioning_v2.testbed-node-1 (vhost: /, messages: 0)
2026-02-15 07:11:10.313668 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - octavia_provisioning_v2.testbed-node-2 (vhost: /, messages: 0)
2026-02-15 07:11:10.314143 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - octavia_provisioning_v2_fanout_12e5c47224a0481da728c98c840f3e2d (vhost: /, messages: 0)
2026-02-15 07:11:10.314167 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - octavia_provisioning_v2_fanout_96cbaa5a29f647a5a3fd72778836211e (vhost: /, messages: 0)
2026-02-15 07:11:10.314178 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - octavia_provisioning_v2_fanout_c9a21b4ae9d5446e95329a67971dc29a (vhost: /, messages: 0)
2026-02-15 07:11:10.314188 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - producer (vhost: /, messages: 0)
2026-02-15 07:11:10.314198 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - producer.testbed-node-0 (vhost: /, messages: 0)
2026-02-15 07:11:10.314208 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - producer.testbed-node-1 (vhost: /, messages: 0)
2026-02-15 07:11:10.314508 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - producer.testbed-node-2 (vhost: /, messages: 0)
2026-02-15 07:11:10.314528 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - producer_fanout_2b2f9e0dbbc84b39abef4901d93e45d5 (vhost: /, messages: 0)
2026-02-15 07:11:10.314865 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - producer_fanout_565040353cc940a3839ced937f779d59 (vhost: /, messages: 0)
2026-02-15 07:11:10.314883 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - producer_fanout_9cf10ef9a9d8446583b093642a7be574 (vhost: /, messages: 0)
2026-02-15 07:11:10.314995 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - producer_fanout_c405653b77ba47ccbac8ab1a1e7d1ffe (vhost: /, messages: 0)
2026-02-15 07:11:10.315112 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - producer_fanout_c477384c816146ef8c69dd4d2e5ecd34 (vhost: /, messages: 0)
2026-02-15 07:11:10.315225 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - producer_fanout_ed79313e364349b3bd4c3625e3ec7c0f (vhost: /, messages: 0)
2026-02-15 07:11:10.315478 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin (vhost: /, messages: 0)
2026-02-15 07:11:10.315497 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-02-15 07:11:10.315680 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-02-15 07:11:10.316029 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-02-15 07:11:10.316047 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin_fanout_29f86511743f41b7a307f65604c70ef1 (vhost: /, messages: 0)
2026-02-15 07:11:10.316057 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin_fanout_5031db78f0174b4289f2b62aaeb9682e (vhost: /, messages: 0)
2026-02-15 07:11:10.316132 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin_fanout_6521ee3fec6045ea929781e77fd04338 (vhost: /, messages: 0)
2026-02-15 07:11:10.316429 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin_fanout_6e70b04974b249f9b5d2bdab0a6c06f1 (vhost: /, messages: 0)
2026-02-15 07:11:10.316447 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin_fanout_b7f97279f13a496c800c8609319e60b7 (vhost: /, messages: 0)
2026-02-15 07:11:10.316620 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin_fanout_d60ef5e958134cc2bf2ae1cc83322efe (vhost: /, messages: 0)
2026-02-15 07:11:10.316746 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin_fanout_e440ff94c2564b71b4b9c0b7bf077b1a (vhost: /, messages: 0)
2026-02-15 07:11:10.317111 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin_fanout_ebf1b6b9bc2f4a1981fac22e1cf886e6 (vhost: /, messages: 0)
2026-02-15 07:11:10.317137 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-plugin_fanout_f4efa5269e734cc28abc1f39391f10d6 (vhost: /, messages: 0)
2026-02-15 07:11:10.317147 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin (vhost: /, messages: 0)
2026-02-15 07:11:10.317418 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin.testbed-node-0 (vhost: /, messages: 0)
2026-02-15 07:11:10.317586 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin.testbed-node-1 (vhost: /, messages: 0)
2026-02-15 07:11:10.317601 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin.testbed-node-2 (vhost: /, messages: 0)
2026-02-15 07:11:10.317610 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_04ecea9e60c447de8b6e3578ca5fca86 (vhost: /, messages: 0)
2026-02-15 07:11:10.317940 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_1fb59652cecb4c66991bbf09246f572d (vhost: /, messages: 0)
2026-02-15 07:11:10.317975 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - 
q-reports-plugin_fanout_222a5d84e40d4b7bbafb0af2fe215773 (vhost: /, messages: 0) 2026-02-15 07:11:10.318169 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_2d4b3f5f2a514310a2cb2f117016bfa2 (vhost: /, messages: 0) 2026-02-15 07:11:10.318189 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_2fb6622e03f841c0a0303d2f0d55197f (vhost: /, messages: 0) 2026-02-15 07:11:10.318335 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_33520c634e73488eb11f6cbef5514609 (vhost: /, messages: 0) 2026-02-15 07:11:10.318358 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_350409a1c8f249299207a2566ca9d53f (vhost: /, messages: 0) 2026-02-15 07:11:10.318655 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_395c4ed7ff5d4c04a2771f26ea7b65eb (vhost: /, messages: 0) 2026-02-15 07:11:10.318677 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_477d3332c2f44f4c88ac58207f3a3f81 (vhost: /, messages: 0) 2026-02-15 07:11:10.318687 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_59b30152d12347efb00d3fb27a087a2e (vhost: /, messages: 0) 2026-02-15 07:11:10.318767 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_6b423f0eb5a140c49daa8dc07e92ed5a (vhost: /, messages: 0) 2026-02-15 07:11:10.318946 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_847599b93b4249b19524fc01bf19322a (vhost: /, messages: 0) 2026-02-15 07:11:10.318963 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_8a549805a5194ab189c05902397e9a55 (vhost: /, messages: 0) 2026-02-15 07:11:10.318973 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_8c2a44555f134fef8e78d72dff6764d6 (vhost: /, messages: 0) 2026-02-15 07:11:10.319151 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_c9bc7a1b8ab448e694ecd12e3980b3f3 (vhost: /, messages: 0) 2026-02-15 
07:11:10.319167 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_cb8205347f5944e58585fe57315c0bcf (vhost: /, messages: 0) 2026-02-15 07:11:10.319281 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_efb994f908394d969c70f793a2ab9166 (vhost: /, messages: 0) 2026-02-15 07:11:10.319386 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-reports-plugin_fanout_ff386cbd966f4744a00f3d2ccc5d147d (vhost: /, messages: 0) 2026-02-15 07:11:10.319464 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions (vhost: /, messages: 0) 2026-02-15 07:11:10.319483 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions.testbed-node-0 (vhost: /, messages: 0) 2026-02-15 07:11:10.319666 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions.testbed-node-1 (vhost: /, messages: 0) 2026-02-15 07:11:10.319680 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions.testbed-node-2 (vhost: /, messages: 0) 2026-02-15 07:11:10.319699 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions_fanout_36417e931f1e49fc843c4ad83f8b63ee (vhost: /, messages: 0) 2026-02-15 07:11:10.319785 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions_fanout_4f417c377ccd437e9db3fad7e44fdbcf (vhost: /, messages: 0) 2026-02-15 07:11:10.319887 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions_fanout_5f0c86b2a7cf4b6aa234deabac49047b (vhost: /, messages: 0) 2026-02-15 07:11:10.319902 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions_fanout_80285f2972d54f318ebac9d65af2f82f (vhost: /, messages: 0) 2026-02-15 07:11:10.320036 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions_fanout_85141d71bb114644930a0df0e65b49fc (vhost: /, messages: 0) 2026-02-15 07:11:10.320143 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - 
q-server-resource-versions_fanout_8dee8e99773a434997f599dc1ab4f60f (vhost: /, messages: 0) 2026-02-15 07:11:10.320158 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions_fanout_b381dfd1376c41048f3810e2885df950 (vhost: /, messages: 0) 2026-02-15 07:11:10.320233 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions_fanout_c11cb1a2cab040b3b82b5a48f8bfa99f (vhost: /, messages: 0) 2026-02-15 07:11:10.323901 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - q-server-resource-versions_fanout_df26767e793043279426a7b9b8efd4d2 (vhost: /, messages: 0) 2026-02-15 07:11:10.323962 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_03545cb1d8be42f083f2127df4737982 (vhost: /, messages: 0) 2026-02-15 07:11:10.323978 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_19787d7602454ef6b763d427ab8f8bea (vhost: /, messages: 1) 2026-02-15 07:11:10.323991 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_3d8c9ae7a07b4cc288f1561a24970427 (vhost: /, messages: 0) 2026-02-15 07:11:10.324077 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_3f9e26196a904786b3e90cbacadf40dd (vhost: /, messages: 0) 2026-02-15 07:11:10.324087 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_5064071898544ab9976b0c2002d91f40 (vhost: /, messages: 0) 2026-02-15 07:11:10.324101 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_636cc345c96448d8b5bf4b4cb9a1b58f (vhost: /, messages: 0) 2026-02-15 07:11:10.324110 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_6939f128b5ef435482748f812dc93f32 (vhost: /, messages: 0) 2026-02-15 07:11:10.324273 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_879a8ee387ee4bd791c392818a8ba097 (vhost: /, messages: 0) 2026-02-15 07:11:10.324289 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_87b19ceb3d4741688a1914c0f520de36 (vhost: /, messages: 0) 2026-02-15 07:11:10.324486 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_890b63938b5a43599c323c3f7bf76281 (vhost: /, messages: 0) 
2026-02-15 07:11:10.324503 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_987418cf83144ce39545d37e566df3e0 (vhost: /, messages: 0) 2026-02-15 07:11:10.324512 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_b149c6d938ef4064af7bca6076ec46f3 (vhost: /, messages: 0) 2026-02-15 07:11:10.324647 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_bd0dd8a7ad4744a791ab694cf565dfbc (vhost: /, messages: 0) 2026-02-15 07:11:10.324662 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_c8d413a197c840279eb82753186d2ee1 (vhost: /, messages: 0) 2026-02-15 07:11:10.324672 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_cc46d0010b3e416dabb9055effc6ff99 (vhost: /, messages: 0) 2026-02-15 07:11:10.324929 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_d60d168451ec43f5898dc4a43d0cd616 (vhost: /, messages: 0) 2026-02-15 07:11:10.324947 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_dfaf2e47528c465dbca6c4ce4b9f435f (vhost: /, messages: 0) 2026-02-15 07:11:10.324956 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_f224f268f32043ac8972eddaf4b40eaf (vhost: /, messages: 0) 2026-02-15 07:11:10.324965 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - reply_f82abeae13a346ea96ff859e08d171c3 (vhost: /, messages: 0) 2026-02-15 07:11:10.325054 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - scheduler (vhost: /, messages: 0) 2026-02-15 07:11:10.325068 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - scheduler.testbed-node-0 (vhost: /, messages: 0) 2026-02-15 07:11:10.325077 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - scheduler.testbed-node-1 (vhost: /, messages: 0) 2026-02-15 07:11:10.325321 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - scheduler.testbed-node-2 (vhost: /, messages: 0) 2026-02-15 07:11:10.325343 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - scheduler_fanout_11a1f3a3910f41c1a7401f42a667a66c (vhost: /, messages: 0) 2026-02-15 07:11:10.325353 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - 
scheduler_fanout_2a75b3d96f4049ce995000f7b178fec6 (vhost: /, messages: 0) 2026-02-15 07:11:10.325361 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - scheduler_fanout_6809aa09be6c4e6e906674bc4eabcda3 (vhost: /, messages: 0) 2026-02-15 07:11:10.325429 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - scheduler_fanout_9ccd86f2946a4c13828067abe15969d6 (vhost: /, messages: 0) 2026-02-15 07:11:10.325439 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - scheduler_fanout_bb808ab67bc544658cf82ea88cc036b3 (vhost: /, messages: 0) 2026-02-15 07:11:10.325722 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - scheduler_fanout_dd9b761efe674357812f5b5deb6a070e (vhost: /, messages: 0) 2026-02-15 07:11:10.325739 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - worker (vhost: /, messages: 0) 2026-02-15 07:11:10.325749 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - worker.testbed-node-0 (vhost: /, messages: 0) 2026-02-15 07:11:10.325761 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - worker.testbed-node-1 (vhost: /, messages: 0) 2026-02-15 07:11:10.326069 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - worker.testbed-node-2 (vhost: /, messages: 0) 2026-02-15 07:11:10.326086 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - worker_fanout_295f4e3df3044b34b64901f6b350b2e7 (vhost: /, messages: 0) 2026-02-15 07:11:10.326094 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - worker_fanout_2f6ffdea1ae542888a12978f043e5c7b (vhost: /, messages: 0) 2026-02-15 07:11:10.326101 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - worker_fanout_3bcc7b3f2cc1458abac8617aafb22840 (vhost: /, messages: 0) 2026-02-15 07:11:10.326109 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - worker_fanout_61740b57fc3d42df8b3685d26835193e (vhost: /, messages: 0) 2026-02-15 07:11:10.326117 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - worker_fanout_9a667534ed4a4b61ab3c8972d7c84be4 (vhost: /, messages: 0) 2026-02-15 07:11:10.326230 | orchestrator | 2026-02-15 07:11:10 | INFO  |  - 
worker_fanout_fbda355774f04c88854af88d3c56f393 (vhost: /, messages: 0) 2026-02-15 07:11:10.662414 | orchestrator | + osism migrate rabbitmq3to4 list-exchanges 2026-02-15 07:11:12.700174 | orchestrator | usage: osism migrate rabbitmq3to4 [-h] [--server SERVER] [--dry-run] 2026-02-15 07:11:12.700266 | orchestrator | [--no-close-connections] [--quorum] 2026-02-15 07:11:12.700278 | orchestrator | [--vhost VHOST] 2026-02-15 07:11:12.700288 | orchestrator | [{list,delete,prepare,check}] 2026-02-15 07:11:12.700298 | orchestrator | [{aodh,barbican,ceilometer,cinder,designate,notifications,manager,magnum,manila,neutron,nova,octavia}] 2026-02-15 07:11:12.700308 | orchestrator | osism migrate rabbitmq3to4: error: argument command: invalid choice: 'list-exchanges' (choose from list, delete, prepare, check) 2026-02-15 07:11:13.451872 | orchestrator | ERROR 2026-02-15 07:11:13.452119 | orchestrator | { 2026-02-15 07:11:13.452742 | orchestrator | "delta": "2:05:17.371195", 2026-02-15 07:11:13.452796 | orchestrator | "end": "2026-02-15 07:11:13.039319", 2026-02-15 07:11:13.452824 | orchestrator | "msg": "non-zero return code", 2026-02-15 07:11:13.452858 | orchestrator | "rc": 2, 2026-02-15 07:11:13.452893 | orchestrator | "start": "2026-02-15 05:05:55.668124" 2026-02-15 07:11:13.452918 | orchestrator | } failure 2026-02-15 07:11:13.792472 | 2026-02-15 07:11:13.792715 | PLAY RECAP 2026-02-15 07:11:13.792862 | orchestrator | ok: 30 changed: 11 unreachable: 0 failed: 1 skipped: 6 rescued: 0 ignored: 0 2026-02-15 07:11:13.792939 | 2026-02-15 07:11:14.042290 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/upgrade-stable.yml@main] 2026-02-15 07:11:14.044401 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-02-15 07:11:14.818649 | 2026-02-15 07:11:14.818876 | PLAY [Post output play] 2026-02-15 07:11:14.836165 | 2026-02-15 07:11:14.836306 | LOOP [stage-output : Register sources] 2026-02-15 07:11:14.905425 | 2026-02-15 
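The failure above is an argument-validation error, not a RabbitMQ problem: the job invoked `osism migrate rabbitmq3to4 list-exchanges`, but the usage message printed by the CLI accepts only `list`, `delete`, `prepare`, or `check` as the command, so the parser exits with code 2 — the same `rc: 2` reported in the Ansible failure. The sketch below is a hypothetical argparse reconstruction based solely on that usage text (the real osism parser may differ); it shows why `list-exchanges` is rejected while `list` is accepted.

```python
import argparse

# Hypothetical parser mirroring only the usage message shown in the log;
# the actual osism CLI implementation may differ.
parser = argparse.ArgumentParser(prog="osism migrate rabbitmq3to4")
parser.add_argument("--server")
parser.add_argument("--dry-run", action="store_true")
parser.add_argument("--no-close-connections", action="store_true")
parser.add_argument("--quorum", action="store_true")
parser.add_argument("--vhost")
parser.add_argument("command", nargs="?",
                    choices=["list", "delete", "prepare", "check"])
parser.add_argument("service", nargs="?",
                    choices=["aodh", "barbican", "ceilometer", "cinder",
                             "designate", "notifications", "manager", "magnum",
                             "manila", "neutron", "nova", "octavia"])

# "list" passes validation; "list-exchanges" does not match any choice,
# so argparse prints the usage text and exits with status 2.
args = parser.parse_args(["list"])
print(args.command)  # list
```

Under these assumptions, the fix on the testbed side would be to call the supported `list` subcommand (optionally with a service argument) rather than `list-exchanges`.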
07:11:14.905793 | TASK [stage-output : Check sudo] 2026-02-15 07:11:15.743907 | orchestrator | sudo: a password is required 2026-02-15 07:11:15.943555 | orchestrator | ok: Runtime: 0:00:00.012228 2026-02-15 07:11:15.959166 | 2026-02-15 07:11:15.959342 | LOOP [stage-output : Set source and destination for files and folders] 2026-02-15 07:11:16.001139 | 2026-02-15 07:11:16.001450 | TASK [stage-output : Build a list of source, dest dictionaries] 2026-02-15 07:11:16.072407 | orchestrator | ok 2026-02-15 07:11:16.083991 | 2026-02-15 07:11:16.084139 | LOOP [stage-output : Ensure target folders exist] 2026-02-15 07:11:16.540486 | orchestrator | ok: "docs" 2026-02-15 07:11:16.540839 | 2026-02-15 07:11:16.793286 | orchestrator | ok: "artifacts" 2026-02-15 07:11:17.061243 | orchestrator | ok: "logs" 2026-02-15 07:11:17.084737 | 2026-02-15 07:11:17.084912 | LOOP [stage-output : Copy files and folders to staging folder] 2026-02-15 07:11:17.127232 | 2026-02-15 07:11:17.127530 | TASK [stage-output : Make all log files readable] 2026-02-15 07:11:17.418437 | orchestrator | ok 2026-02-15 07:11:17.427737 | 2026-02-15 07:11:17.427885 | TASK [stage-output : Rename log files that match extensions_to_txt] 2026-02-15 07:11:17.462471 | orchestrator | skipping: Conditional result was False 2026-02-15 07:11:17.481489 | 2026-02-15 07:11:17.481681 | TASK [stage-output : Discover log files for compression] 2026-02-15 07:11:17.506532 | orchestrator | skipping: Conditional result was False 2026-02-15 07:11:17.522238 | 2026-02-15 07:11:17.522397 | LOOP [stage-output : Archive everything from logs] 2026-02-15 07:11:17.569583 | 2026-02-15 07:11:17.569780 | PLAY [Post cleanup play] 2026-02-15 07:11:17.578620 | 2026-02-15 07:11:17.578779 | TASK [Set cloud fact (Zuul deployment)] 2026-02-15 07:11:17.645952 | orchestrator | ok 2026-02-15 07:11:17.658468 | 2026-02-15 07:11:17.658598 | TASK [Set cloud fact (local deployment)] 2026-02-15 07:11:17.682477 | orchestrator | skipping: Conditional result was 
False 2026-02-15 07:11:17.692495 | 2026-02-15 07:11:17.692615 | TASK [Clean the cloud environment] 2026-02-15 07:11:18.300767 | orchestrator | 2026-02-15 07:11:18 - clean up servers 2026-02-15 07:11:19.102255 | orchestrator | 2026-02-15 07:11:19 - testbed-manager 2026-02-15 07:11:19.191142 | orchestrator | 2026-02-15 07:11:19 - testbed-node-5 2026-02-15 07:11:19.286261 | orchestrator | 2026-02-15 07:11:19 - testbed-node-2 2026-02-15 07:11:19.379492 | orchestrator | 2026-02-15 07:11:19 - testbed-node-0 2026-02-15 07:11:19.470571 | orchestrator | 2026-02-15 07:11:19 - testbed-node-3 2026-02-15 07:11:19.593232 | orchestrator | 2026-02-15 07:11:19 - testbed-node-1 2026-02-15 07:11:19.687058 | orchestrator | 2026-02-15 07:11:19 - testbed-node-4 2026-02-15 07:11:19.772717 | orchestrator | 2026-02-15 07:11:19 - clean up keypairs 2026-02-15 07:11:19.794738 | orchestrator | 2026-02-15 07:11:19 - testbed 2026-02-15 07:11:19.818480 | orchestrator | 2026-02-15 07:11:19 - wait for servers to be gone 2026-02-15 07:11:30.754158 | orchestrator | 2026-02-15 07:11:30 - clean up ports 2026-02-15 07:11:30.939456 | orchestrator | 2026-02-15 07:11:30 - 06b258d1-5898-4ef0-abe0-f0a023fda691 2026-02-15 07:11:31.181403 | orchestrator | 2026-02-15 07:11:31 - 06e16ad1-87be-40cf-9670-fc00713741d2 2026-02-15 07:11:31.446238 | orchestrator | 2026-02-15 07:11:31 - 177f3ce5-2668-4ed1-84b4-f39bbcecbe9f 2026-02-15 07:11:31.844785 | orchestrator | 2026-02-15 07:11:31 - 29eb1ea8-030f-4327-83ea-991a9f00817f 2026-02-15 07:11:32.108233 | orchestrator | 2026-02-15 07:11:32 - 68f59e5e-1c55-4aa2-a3d7-5f91cd60fad3 2026-02-15 07:11:32.324103 | orchestrator | 2026-02-15 07:11:32 - 6f2205a6-5fc5-44bf-bcab-983266a12c70 2026-02-15 07:11:32.522329 | orchestrator | 2026-02-15 07:11:32 - 71d7ec8a-009f-4f2d-b5dd-fb578ce37fd0 2026-02-15 07:11:32.790733 | orchestrator | 2026-02-15 07:11:32 - clean up volumes 2026-02-15 07:11:32.910055 | orchestrator | 2026-02-15 07:11:32 - testbed-volume-manager-base 2026-02-15 
07:11:32.945846 | orchestrator | 2026-02-15 07:11:32 - testbed-volume-5-node-base 2026-02-15 07:11:32.987218 | orchestrator | 2026-02-15 07:11:32 - testbed-volume-4-node-base 2026-02-15 07:11:33.030232 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-1-node-base 2026-02-15 07:11:33.068671 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-0-node-base 2026-02-15 07:11:33.108503 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-2-node-base 2026-02-15 07:11:33.153691 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-3-node-base 2026-02-15 07:11:33.190903 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-8-node-5 2026-02-15 07:11:33.233605 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-2-node-5 2026-02-15 07:11:33.276880 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-5-node-5 2026-02-15 07:11:33.319144 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-7-node-4 2026-02-15 07:11:33.367689 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-3-node-3 2026-02-15 07:11:33.408980 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-1-node-4 2026-02-15 07:11:33.454830 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-0-node-3 2026-02-15 07:11:33.494820 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-6-node-3 2026-02-15 07:11:33.539826 | orchestrator | 2026-02-15 07:11:33 - testbed-volume-4-node-4 2026-02-15 07:11:33.595595 | orchestrator | 2026-02-15 07:11:33 - disconnect routers 2026-02-15 07:11:33.678153 | orchestrator | 2026-02-15 07:11:33 - testbed 2026-02-15 07:11:34.685688 | orchestrator | 2026-02-15 07:11:34 - clean up subnets 2026-02-15 07:11:34.727285 | orchestrator | 2026-02-15 07:11:34 - subnet-testbed-management 2026-02-15 07:11:34.900305 | orchestrator | 2026-02-15 07:11:34 - clean up networks 2026-02-15 07:11:35.086488 | orchestrator | 2026-02-15 07:11:35 - net-testbed-management 2026-02-15 07:11:35.579138 | orchestrator | 2026-02-15 07:11:35 - clean up security groups 2026-02-15 07:11:35.621268 | orchestrator | 
2026-02-15 07:11:35 - testbed-management 2026-02-15 07:11:35.749797 | orchestrator | 2026-02-15 07:11:35 - testbed-node 2026-02-15 07:11:35.855824 | orchestrator | 2026-02-15 07:11:35 - clean up floating ips 2026-02-15 07:11:35.887263 | orchestrator | 2026-02-15 07:11:35 - 81.163.193.145 2026-02-15 07:11:36.260555 | orchestrator | 2026-02-15 07:11:36 - clean up routers 2026-02-15 07:11:36.378316 | orchestrator | 2026-02-15 07:11:36 - testbed 2026-02-15 07:11:37.255495 | orchestrator | ok: Runtime: 0:00:19.156079 2026-02-15 07:11:37.260341 | 2026-02-15 07:11:37.260507 | PLAY RECAP 2026-02-15 07:11:37.260634 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2026-02-15 07:11:37.260745 | 2026-02-15 07:11:37.407122 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2026-02-15 07:11:37.409393 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-15 07:11:38.165641 | 2026-02-15 07:11:38.165833 | PLAY [Cleanup play] 2026-02-15 07:11:38.182121 | 2026-02-15 07:11:38.182251 | TASK [Set cloud fact (Zuul deployment)] 2026-02-15 07:11:38.237215 | orchestrator | ok 2026-02-15 07:11:38.246223 | 2026-02-15 07:11:38.246364 | TASK [Set cloud fact (local deployment)] 2026-02-15 07:11:38.280576 | orchestrator | skipping: Conditional result was False 2026-02-15 07:11:38.291697 | 2026-02-15 07:11:38.291820 | TASK [Clean the cloud environment] 2026-02-15 07:11:39.453252 | orchestrator | 2026-02-15 07:11:39 - clean up servers 2026-02-15 07:11:39.910009 | orchestrator | 2026-02-15 07:11:39 - clean up keypairs 2026-02-15 07:11:39.927655 | orchestrator | 2026-02-15 07:11:39 - wait for servers to be gone 2026-02-15 07:11:39.974518 | orchestrator | 2026-02-15 07:11:39 - clean up ports 2026-02-15 07:11:40.049802 | orchestrator | 2026-02-15 07:11:40 - clean up volumes 2026-02-15 07:11:40.126156 | orchestrator | 2026-02-15 07:11:40 - disconnect routers 2026-02-15 
07:11:40.152066 | orchestrator | 2026-02-15 07:11:40 - clean up subnets 2026-02-15 07:11:40.171447 | orchestrator | 2026-02-15 07:11:40 - clean up networks 2026-02-15 07:11:40.350243 | orchestrator | 2026-02-15 07:11:40 - clean up security groups 2026-02-15 07:11:40.388184 | orchestrator | 2026-02-15 07:11:40 - clean up floating ips 2026-02-15 07:11:40.415808 | orchestrator | 2026-02-15 07:11:40 - clean up routers 2026-02-15 07:11:40.838229 | orchestrator | ok: Runtime: 0:00:01.385245 2026-02-15 07:11:40.842112 | 2026-02-15 07:11:40.842266 | PLAY RECAP 2026-02-15 07:11:40.842388 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2026-02-15 07:11:40.842449 | 2026-02-15 07:11:40.964386 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2026-02-15 07:11:40.966921 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-15 07:11:41.721191 | 2026-02-15 07:11:41.721367 | PLAY [Base post-fetch] 2026-02-15 07:11:41.737484 | 2026-02-15 07:11:41.737629 | TASK [fetch-output : Set log path for multiple nodes] 2026-02-15 07:11:41.803793 | orchestrator | skipping: Conditional result was False 2026-02-15 07:11:41.817868 | 2026-02-15 07:11:41.818106 | TASK [fetch-output : Set log path for single node] 2026-02-15 07:11:41.867724 | orchestrator | ok 2026-02-15 07:11:41.876321 | 2026-02-15 07:11:41.876468 | LOOP [fetch-output : Ensure local output dirs] 2026-02-15 07:11:42.361537 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/2104699b13df41e3896cc6406b0490a9/work/logs" 2026-02-15 07:11:42.633522 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2104699b13df41e3896cc6406b0490a9/work/artifacts" 2026-02-15 07:11:42.907116 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2104699b13df41e3896cc6406b0490a9/work/docs" 2026-02-15 07:11:42.922493 | 2026-02-15 07:11:42.922616 | LOOP [fetch-output : Collect logs, artifacts and docs] 
2026-02-15 07:11:43.847884 | orchestrator | changed: .d..t...... ./ 2026-02-15 07:11:43.848240 | orchestrator | changed: All items complete 2026-02-15 07:11:43.848307 | 2026-02-15 07:11:44.597609 | orchestrator | changed: .d..t...... ./ 2026-02-15 07:11:45.308056 | orchestrator | changed: .d..t...... ./ 2026-02-15 07:11:45.343124 | 2026-02-15 07:11:45.343300 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-02-15 07:11:45.379484 | orchestrator | skipping: Conditional result was False 2026-02-15 07:11:45.382785 | orchestrator | skipping: Conditional result was False 2026-02-15 07:11:45.404470 | 2026-02-15 07:11:45.404596 | PLAY RECAP 2026-02-15 07:11:45.404731 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-02-15 07:11:45.404780 | 2026-02-15 07:11:45.530617 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-02-15 07:11:45.533121 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-15 07:11:46.292242 | 2026-02-15 07:11:46.292416 | PLAY [Base post] 2026-02-15 07:11:46.307072 | 2026-02-15 07:11:46.307219 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-02-15 07:11:47.276749 | orchestrator | changed 2026-02-15 07:11:47.286600 | 2026-02-15 07:11:47.286740 | PLAY RECAP 2026-02-15 07:11:47.286816 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-02-15 07:11:47.286922 | 2026-02-15 07:11:47.412536 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-02-15 07:11:47.414995 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-02-15 07:11:48.198011 | 2026-02-15 07:11:48.198186 | PLAY [Base post-logs] 2026-02-15 07:11:48.209044 | 2026-02-15 07:11:48.209173 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-02-15 
07:11:48.679453 | localhost | changed 2026-02-15 07:11:48.697554 | 2026-02-15 07:11:48.697758 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-02-15 07:11:48.724968 | localhost | ok 2026-02-15 07:11:48.729408 | 2026-02-15 07:11:48.729518 | TASK [Set zuul-log-path fact] 2026-02-15 07:11:48.744873 | localhost | ok 2026-02-15 07:11:48.754904 | 2026-02-15 07:11:48.755027 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-02-15 07:11:48.782542 | localhost | ok 2026-02-15 07:11:48.792551 | 2026-02-15 07:11:48.792753 | TASK [upload-logs : Create log directories] 2026-02-15 07:11:49.306526 | localhost | changed 2026-02-15 07:11:49.311407 | 2026-02-15 07:11:49.311555 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-02-15 07:11:49.845404 | localhost -> localhost | ok: Runtime: 0:00:00.007349 2026-02-15 07:11:49.849921 | 2026-02-15 07:11:49.850048 | TASK [upload-logs : Upload logs to log server] 2026-02-15 07:11:50.414417 | localhost | Output suppressed because no_log was given 2026-02-15 07:11:50.417020 | 2026-02-15 07:11:50.417140 | LOOP [upload-logs : Compress console log and json output] 2026-02-15 07:11:50.478028 | localhost | skipping: Conditional result was False 2026-02-15 07:11:50.483384 | localhost | skipping: Conditional result was False 2026-02-15 07:11:50.498122 | 2026-02-15 07:11:50.498351 | LOOP [upload-logs : Upload compressed console log and json output] 2026-02-15 07:11:50.546462 | localhost | skipping: Conditional result was False 2026-02-15 07:11:50.547191 | 2026-02-15 07:11:50.551451 | localhost | skipping: Conditional result was False 2026-02-15 07:11:50.564557 | 2026-02-15 07:11:50.564829 | LOOP [upload-logs : Upload console log and json output]